00:00:00.000 Started by upstream project "autotest-per-patch" build number 132412
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.670 The recommended git tool is: git
00:00:00.671 using credential 00000000-0000-0000-0000-000000000002
00:00:00.673 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.685 Fetching changes from the remote Git repository
00:00:00.687 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.699 Using shallow fetch with depth 1
00:00:00.699 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.699 > git --version # timeout=10
00:00:00.710 > git --version # 'git version 2.39.2'
00:00:00.710 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.721 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.721 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.640 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.656 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.672 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.672 > git config core.sparsecheckout # timeout=10
00:00:05.686 > git read-tree -mu HEAD # timeout=10
00:00:05.705 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.727 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.727 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.813 [Pipeline] Start of Pipeline
00:00:05.829 [Pipeline] library
00:00:05.830 Loading library shm_lib@master
00:00:05.830 Library shm_lib@master is cached. Copying from home.
00:00:05.851 [Pipeline] node
00:00:05.862 Running on CYP13 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.863 [Pipeline] {
00:00:05.872 [Pipeline] catchError
00:00:05.873 [Pipeline] {
00:00:05.881 [Pipeline] wrap
00:00:05.887 [Pipeline] {
00:00:05.893 [Pipeline] stage
00:00:05.894 [Pipeline] { (Prologue)
00:00:06.083 [Pipeline] sh
00:00:06.366 + logger -p user.info -t JENKINS-CI
00:00:06.381 [Pipeline] echo
00:00:06.383 Node: CYP13
00:00:06.388 [Pipeline] sh
00:00:06.687 [Pipeline] setCustomBuildProperty
00:00:06.697 [Pipeline] echo
00:00:06.698 Cleanup processes
00:00:06.702 [Pipeline] sh
00:00:06.979 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.979 1874918 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.991 [Pipeline] sh
00:00:07.274 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.274 ++ grep -v 'sudo pgrep'
00:00:07.274 ++ awk '{print $1}'
00:00:07.274 + sudo kill -9
00:00:07.274 + true
00:00:07.289 [Pipeline] cleanWs
00:00:07.300 [WS-CLEANUP] Deleting project workspace...
00:00:07.300 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.306 [WS-CLEANUP] done
00:00:07.312 [Pipeline] setCustomBuildProperty
00:00:07.327 [Pipeline] sh
00:00:07.607 + sudo git config --global --replace-all safe.directory '*'
00:00:07.701 [Pipeline] httpRequest
00:00:08.329 [Pipeline] echo
00:00:08.331 Sorcerer 10.211.164.20 is alive
00:00:08.342 [Pipeline] retry
00:00:08.344 [Pipeline] {
00:00:08.359 [Pipeline] httpRequest
00:00:08.363 HttpMethod: GET
00:00:08.363 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.364 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.384 Response Code: HTTP/1.1 200 OK
00:00:08.384 Success: Status code 200 is in the accepted range: 200,404
00:00:08.384 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.846 [Pipeline] }
00:00:24.863 [Pipeline] // retry
00:00:24.871 [Pipeline] sh
00:00:25.159 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.176 [Pipeline] httpRequest
00:00:25.556 [Pipeline] echo
00:00:25.558 Sorcerer 10.211.164.20 is alive
00:00:25.569 [Pipeline] retry
00:00:25.571 [Pipeline] {
00:00:25.584 [Pipeline] httpRequest
00:00:25.590 HttpMethod: GET
00:00:25.590 URL: http://10.211.164.20/packages/spdk_7bc1aace114e829dcd7661e5d80f80efc04bb5ba.tar.gz
00:00:25.591 Sending request to url: http://10.211.164.20/packages/spdk_7bc1aace114e829dcd7661e5d80f80efc04bb5ba.tar.gz
00:00:25.603 Response Code: HTTP/1.1 200 OK
00:00:25.603 Success: Status code 200 is in the accepted range: 200,404
00:00:25.604 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_7bc1aace114e829dcd7661e5d80f80efc04bb5ba.tar.gz
00:00:49.627 [Pipeline] }
00:00:49.645 [Pipeline] // retry
00:00:49.656 [Pipeline] sh
00:00:49.945 + tar --no-same-owner -xf spdk_7bc1aace114e829dcd7661e5d80f80efc04bb5ba.tar.gz
00:00:52.538 [Pipeline] sh
00:00:52.823 + git -C spdk log --oneline -n5
00:00:52.823 7bc1aace1 dif: Set DIF field to 0 explicitly if its check is disabled
00:00:52.823 ce2cd8dc9 bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata
00:00:52.823 2d31d77ac ut/bdev: Remove duplication with many stups among unit test files
00:00:52.823 4c87f1208 accel: Fix a bug that append_dif_generate_copy() did not set dif_ctx
00:00:52.823 e9f1d748e accel: Fix comments for spdk_accel_*_dif_verify_copy()
00:00:52.834 [Pipeline] }
00:00:52.845 [Pipeline] // stage
00:00:52.855 [Pipeline] stage
00:00:52.858 [Pipeline] { (Prepare)
00:00:52.875 [Pipeline] writeFile
00:00:52.889 [Pipeline] sh
00:00:53.175 + logger -p user.info -t JENKINS-CI
00:00:53.190 [Pipeline] sh
00:00:53.476 + logger -p user.info -t JENKINS-CI
00:00:53.489 [Pipeline] sh
00:00:53.777 + cat autorun-spdk.conf
00:00:53.777 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.777 SPDK_TEST_NVMF=1
00:00:53.777 SPDK_TEST_NVME_CLI=1
00:00:53.777 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:53.777 SPDK_TEST_NVMF_NICS=e810
00:00:53.777 SPDK_TEST_VFIOUSER=1
00:00:53.777 SPDK_RUN_UBSAN=1
00:00:53.777 NET_TYPE=phy
00:00:53.786 RUN_NIGHTLY=0
00:00:53.792 [Pipeline] readFile
00:00:53.823 [Pipeline] withEnv
00:00:53.826 [Pipeline] {
00:00:53.841 [Pipeline] sh
00:00:54.131 + set -ex
00:00:54.131 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:54.131 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:54.131 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.131 ++ SPDK_TEST_NVMF=1
00:00:54.131 ++ SPDK_TEST_NVME_CLI=1
00:00:54.131 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:54.131 ++ SPDK_TEST_NVMF_NICS=e810
00:00:54.131 ++ SPDK_TEST_VFIOUSER=1
00:00:54.131 ++ SPDK_RUN_UBSAN=1
00:00:54.131 ++ NET_TYPE=phy
00:00:54.131 ++ RUN_NIGHTLY=0
00:00:54.131 + case $SPDK_TEST_NVMF_NICS in
00:00:54.131 + DRIVERS=ice
00:00:54.131 + [[ tcp == \r\d\m\a ]]
00:00:54.131 + [[ -n ice ]]
00:00:54.131 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:54.131 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:54.131 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:54.131 rmmod: ERROR: Module irdma is not currently loaded
00:00:54.131 rmmod: ERROR: Module i40iw is not currently loaded
00:00:54.131 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:54.131 + true
00:00:54.131 + for D in $DRIVERS
00:00:54.131 + sudo modprobe ice
00:00:54.131 + exit 0
00:00:54.142 [Pipeline] }
00:00:54.159 [Pipeline] // withEnv
00:00:54.166 [Pipeline] }
00:00:54.181 [Pipeline] // stage
00:00:54.191 [Pipeline] catchError
00:00:54.193 [Pipeline] {
00:00:54.207 [Pipeline] timeout
00:00:54.207 Timeout set to expire in 1 hr 0 min
00:00:54.210 [Pipeline] {
00:00:54.224 [Pipeline] stage
00:00:54.227 [Pipeline] { (Tests)
00:00:54.243 [Pipeline] sh
00:00:54.535 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:54.535 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:54.535 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:54.535 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:54.535 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:54.535 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:54.535 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:54.535 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:54.535 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:54.535 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:54.535 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:54.535 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:54.535 + source /etc/os-release
00:00:54.535 ++ NAME='Fedora Linux'
00:00:54.535 ++ VERSION='39 (Cloud Edition)'
00:00:54.535 ++ ID=fedora
00:00:54.535 ++ VERSION_ID=39
00:00:54.535 ++ VERSION_CODENAME=
00:00:54.535 ++ PLATFORM_ID=platform:f39
00:00:54.535 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:54.535 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:54.535 ++ LOGO=fedora-logo-icon
00:00:54.535 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:54.535 ++ HOME_URL=https://fedoraproject.org/
00:00:54.535 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:54.535 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:54.535 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:54.535 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:54.535 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:54.535 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:54.535 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:54.535 ++ SUPPORT_END=2024-11-12
00:00:54.535 ++ VARIANT='Cloud Edition'
00:00:54.535 ++ VARIANT_ID=cloud
00:00:54.535 + uname -a
00:00:54.535 Linux spdk-cyp-13 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:54.535 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:57.839 Hugepages
00:00:57.839 node hugesize free / total
00:00:57.839 node0 1048576kB 0 / 0
00:00:57.839 node0 2048kB 0 / 0
00:00:57.839 node1 1048576kB 0 / 0
00:00:57.839 node1 2048kB 0 / 0
00:00:57.839
00:00:57.839 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:57.839 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:00:57.839 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:00:57.839 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:00:57.839 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:00:57.839 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:00:57.839 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:00:57.839 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:00:57.839 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:00:57.839 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:57.839 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:00:57.839 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:00:57.839 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:00:57.839 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:00:57.839 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:00:57.839 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:00:57.839 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:00:57.839 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:00:57.839 + rm -f /tmp/spdk-ld-path
00:00:57.839 + source autorun-spdk.conf
00:00:57.839 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.839 ++ SPDK_TEST_NVMF=1
00:00:57.839 ++ SPDK_TEST_NVME_CLI=1
00:00:57.839 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:57.839 ++ SPDK_TEST_NVMF_NICS=e810
00:00:57.839 ++ SPDK_TEST_VFIOUSER=1
00:00:57.839 ++ SPDK_RUN_UBSAN=1
00:00:57.839 ++ NET_TYPE=phy
00:00:57.839 ++ RUN_NIGHTLY=0
00:00:57.839 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:57.839 + [[ -n '' ]]
00:00:57.839 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:57.839 + for M in /var/spdk/build-*-manifest.txt
00:00:57.839 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:57.839 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:57.839 + for M in /var/spdk/build-*-manifest.txt
00:00:57.839 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:57.839 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:57.839 + for M in /var/spdk/build-*-manifest.txt
00:00:57.839 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:57.839 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:57.839 ++ uname
00:00:57.839 + [[ Linux == \L\i\n\u\x ]]
00:00:57.839 + sudo dmesg -T
00:00:57.839 + sudo dmesg --clear
00:00:57.839 + dmesg_pid=1876486
00:00:57.839 + [[ Fedora Linux == FreeBSD ]]
00:00:57.839 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:57.839 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:57.839 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:57.839 + [[ -x /usr/src/fio-static/fio ]]
00:00:57.839 + export FIO_BIN=/usr/src/fio-static/fio
00:00:57.839 + FIO_BIN=/usr/src/fio-static/fio
00:00:57.839 + sudo dmesg -Tw
00:00:57.839 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:57.839 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:57.839 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:57.839 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:57.839 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:57.839 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:57.839 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:57.839 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:57.839 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:57.839 16:12:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:00:57.839 16:12:43 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:00:57.839 16:12:43 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:00:57.839 16:12:43 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:00:57.839 16:12:43 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:57.839 16:12:43 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:00:57.839 16:12:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:57.839 16:12:43 -- scripts/common.sh@15 -- $ shopt -s extglob
00:00:57.839 16:12:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:57.839 16:12:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:57.839 16:12:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:57.840 16:12:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:57.840 16:12:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:57.840 16:12:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:57.840 16:12:43 -- paths/export.sh@5 -- $ export PATH
00:00:57.840 16:12:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:57.840 16:12:43 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:57.840 16:12:43 -- common/autobuild_common.sh@493 -- $ date +%s
00:00:57.840 16:12:43 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732115563.XXXXXX
00:00:57.840 16:12:43 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732115563.2tvyll
00:00:57.840 16:12:43 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:00:57.840 16:12:43 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:00:57.840 16:12:43 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:57.840 16:12:43 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:57.840 16:12:43 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:57.840 16:12:43 -- common/autobuild_common.sh@509 -- $ get_config_params
00:00:57.840 16:12:43 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:00:57.840 16:12:43 -- common/autotest_common.sh@10 -- $ set +x
00:00:57.840 16:12:43 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:57.840 16:12:43 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:00:57.840 16:12:43 -- pm/common@17 -- $ local monitor
00:00:57.840 16:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:57.840 16:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:57.840 16:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:57.840 16:12:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:57.840 16:12:43 -- pm/common@21 -- $ date +%s
00:00:57.840 16:12:43 -- pm/common@25 -- $ sleep 1
00:00:57.840 16:12:43 -- pm/common@21 -- $ date +%s
00:00:57.840 16:12:43 -- pm/common@21 -- $ date +%s
00:00:57.840 16:12:43 -- pm/common@21 -- $ date +%s
00:00:57.840 16:12:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732115563
00:00:57.840 16:12:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732115563
00:00:57.840 16:12:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732115563
00:00:57.840 16:12:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732115563
00:00:58.102 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732115563_collect-cpu-load.pm.log
00:00:58.102 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732115563_collect-vmstat.pm.log
00:00:58.102 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732115563_collect-cpu-temp.pm.log
00:00:58.102 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732115563_collect-bmc-pm.bmc.pm.log
00:00:59.046 16:12:44 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:00:59.046 16:12:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:59.046 16:12:44 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:59.046 16:12:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:59.046 16:12:44 -- spdk/autobuild.sh@16 -- $ date -u
00:00:59.046 Wed Nov 20 03:12:44 PM UTC 2024
00:00:59.046 16:12:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:59.046 v25.01-pre-233-g7bc1aace1
00:00:59.046 16:12:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:59.046 16:12:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:59.046 16:12:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:59.046 16:12:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:59.046 16:12:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:59.046 16:12:44 -- common/autotest_common.sh@10 -- $ set +x
00:00:59.046 ************************************
00:00:59.046 START TEST ubsan
00:00:59.046 ************************************
00:00:59.046 16:12:44 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:00:59.046 using ubsan
00:00:59.046
00:00:59.046 real 0m0.001s
00:00:59.046 user 0m0.000s
00:00:59.046 sys 0m0.001s
00:00:59.046 16:12:44 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:00:59.046 16:12:44 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:59.046 ************************************
00:00:59.046 END TEST ubsan
00:00:59.046 ************************************
00:00:59.046 16:12:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:59.046 16:12:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:59.046 16:12:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:59.046 16:12:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:59.046 16:12:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:59.046 16:12:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:59.046 16:12:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:59.046 16:12:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:59.046 16:12:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:59.307 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:59.307 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:59.569 Using 'verbs' RDMA provider
00:01:15.422 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:27.667 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:27.667 Creating mk/config.mk...done.
00:01:27.667 Creating mk/cc.flags.mk...done.
00:01:27.667 Type 'make' to build.
00:01:27.667 16:13:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:27.667 16:13:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:27.667 16:13:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:27.667 16:13:13 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.667 ************************************
00:01:27.667 START TEST make
00:01:27.667 ************************************
00:01:27.667 16:13:13 make -- common/autotest_common.sh@1129 -- $ make -j144
00:01:27.999 make[1]: Nothing to be done for 'all'.
00:01:29.381 The Meson build system
00:01:29.381 Version: 1.5.0
00:01:29.381 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:29.381 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:29.381 Build type: native build
00:01:29.381 Project name: libvfio-user
00:01:29.381 Project version: 0.0.1
00:01:29.381 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:29.381 C linker for the host machine: cc ld.bfd 2.40-14
00:01:29.381 Host machine cpu family: x86_64
00:01:29.381 Host machine cpu: x86_64
00:01:29.381 Run-time dependency threads found: YES
00:01:29.381 Library dl found: YES
00:01:29.381 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:29.381 Run-time dependency json-c found: YES 0.17
00:01:29.381 Run-time dependency cmocka found: YES 1.1.7
00:01:29.381 Program pytest-3 found: NO
00:01:29.381 Program flake8 found: NO
00:01:29.381 Program misspell-fixer found: NO
00:01:29.381 Program restructuredtext-lint found: NO
00:01:29.381 Program valgrind found: YES (/usr/bin/valgrind)
00:01:29.381 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:29.381 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:29.381 Compiler for C supports arguments -Wwrite-strings: YES
00:01:29.381 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:29.381 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:29.381 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:29.381 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:29.381 Build targets in project: 8
00:01:29.381 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:29.381 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:29.381
00:01:29.381 libvfio-user 0.0.1
00:01:29.381
00:01:29.381 User defined options
00:01:29.381 buildtype : debug
00:01:29.381 default_library: shared
00:01:29.381 libdir : /usr/local/lib
00:01:29.381
00:01:29.381 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:29.640 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:29.640 [1/37] Compiling C object samples/null.p/null.c.o
00:01:29.640 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:29.640 [3/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:29.640 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:29.640 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:29.640 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:29.640 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:29.640 [8/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:29.640 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:29.640 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:29.640 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:29.640 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:29.640 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:29.640 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:29.640 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:29.640 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:29.640 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:29.640 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:29.640 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:29.640 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:29.899 [21/37] Compiling C object samples/server.p/server.c.o
00:01:29.899 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:29.899 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:29.899 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:29.899 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:29.899 [26/37] Compiling C object samples/client.p/client.c.o
00:01:29.899 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:29.899 [28/37] Linking target samples/client
00:01:29.899 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:29.899 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:29.899 [31/37] Linking target test/unit_tests
00:01:30.159 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:30.159 [33/37] Linking target samples/null
00:01:30.159 [34/37] Linking target samples/server
00:01:30.159 [35/37] Linking target samples/lspci
00:01:30.159 [36/37] Linking target samples/gpio-pci-idio-16
00:01:30.159 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:30.159 INFO: autodetecting backend as ninja
00:01:30.159 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:30.159 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:30.419 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:30.419 ninja: no work to do.
00:01:37.017 The Meson build system
00:01:37.017 Version: 1.5.0
00:01:37.017 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:37.017 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:37.017 Build type: native build
00:01:37.017 Program cat found: YES (/usr/bin/cat)
00:01:37.017 Project name: DPDK
00:01:37.017 Project version: 24.03.0
00:01:37.017 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:37.017 C linker for the host machine: cc ld.bfd 2.40-14
00:01:37.017 Host machine cpu family: x86_64
00:01:37.017 Host machine cpu: x86_64
00:01:37.017 Message: ## Building in Developer Mode ##
00:01:37.017 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:37.017 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:37.017 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:37.017 Program python3 found: YES (/usr/bin/python3)
00:01:37.017 Program cat found: YES (/usr/bin/cat)
00:01:37.017 Compiler for C supports arguments -march=native: YES
00:01:37.017 Checking for size of "void *" : 8
00:01:37.017 Checking for size of "void *" : 8 (cached)
00:01:37.017 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:37.017 Library m found: YES
00:01:37.017 Library numa found: YES
00:01:37.017 Has header "numaif.h" : YES
00:01:37.017 Library fdt found: NO
00:01:37.017 Library execinfo found: NO
00:01:37.017 Has header "execinfo.h" : YES
00:01:37.017 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:37.017 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:37.017 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:37.017 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:37.017 Run-time dependency openssl found: YES 3.1.1
00:01:37.017 Run-time dependency libpcap found: YES 1.10.4
00:01:37.017 Has header "pcap.h" with dependency libpcap: YES
00:01:37.017 Compiler for C supports arguments -Wcast-qual: YES
00:01:37.017 Compiler for C supports arguments -Wdeprecated: YES
00:01:37.017 Compiler for C supports arguments -Wformat: YES
00:01:37.017 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:37.017 Compiler for C supports arguments -Wformat-security: NO
00:01:37.017 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:37.017 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:37.017 Compiler for C supports arguments -Wnested-externs: YES
00:01:37.017 Compiler for C supports arguments -Wold-style-definition: YES
00:01:37.017 Compiler for C supports arguments -Wpointer-arith: YES
00:01:37.017 Compiler for C supports arguments -Wsign-compare: YES
00:01:37.017 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:37.017 Compiler for C supports arguments -Wundef: YES
00:01:37.017 Compiler for C supports arguments -Wwrite-strings: YES
00:01:37.017 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:37.017 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:37.017 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:37.017 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:37.017 Program objdump found: YES (/usr/bin/objdump)
00:01:37.017 Compiler for C supports arguments -mavx512f: YES
00:01:37.017 Checking if "AVX512 checking" compiles: YES
00:01:37.017 Fetching value of define "__SSE4_2__" : 1
00:01:37.017 Fetching value of define "__AES__" : 1
00:01:37.017 Fetching value of define "__AVX__" : 1
00:01:37.017 Fetching value of define "__AVX2__" : 1
00:01:37.017 Fetching value of define "__AVX512BW__" : 1
00:01:37.017 Fetching value of define "__AVX512CD__" : 1
00:01:37.017 Fetching value of define "__AVX512DQ__" : 1
00:01:37.017 Fetching value of define "__AVX512F__" : 1
00:01:37.017 Fetching value of define "__AVX512VL__" : 1
00:01:37.017 Fetching value of define "__PCLMUL__" : 1
00:01:37.017 Fetching value of define "__RDRND__" : 1
00:01:37.017 Fetching value of define "__RDSEED__" : 1
00:01:37.017 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:37.017 Fetching value of define "__znver1__" : (undefined)
00:01:37.017 Fetching value of define "__znver2__" : (undefined)
00:01:37.017 Fetching value of define "__znver3__" : (undefined)
00:01:37.017 Fetching value of define "__znver4__" : (undefined)
00:01:37.017 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:37.017 Message: lib/log: Defining dependency "log"
00:01:37.017 Message: lib/kvargs: Defining dependency "kvargs"
00:01:37.017 Message: lib/telemetry: Defining dependency "telemetry"
00:01:37.017 Checking for function "getentropy" : NO
00:01:37.017 Message: lib/eal: Defining dependency "eal"
00:01:37.017 Message: lib/ring: Defining dependency "ring"
00:01:37.017 Message: lib/rcu: Defining dependency "rcu"
00:01:37.017 Message: lib/mempool: Defining dependency "mempool"
00:01:37.017 Message: lib/mbuf: Defining dependency "mbuf"
00:01:37.017 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:37.018 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:37.018 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:37.018 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:37.018 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:37.018 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:37.018 Compiler for C supports arguments -mpclmul: YES
00:01:37.018 Compiler for C supports arguments -maes: YES
00:01:37.018 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:37.018 Compiler for C supports arguments -mavx512bw: YES
00:01:37.018 Compiler for C supports arguments -mavx512dq: YES
00:01:37.018 Compiler for C supports arguments -mavx512vl: YES
00:01:37.018 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:37.018 Compiler for C supports arguments -mavx2: YES 00:01:37.018 Compiler for C supports arguments -mavx: YES 00:01:37.018 Message: lib/net: Defining dependency "net" 00:01:37.018 Message: lib/meter: Defining dependency "meter" 00:01:37.018 Message: lib/ethdev: Defining dependency "ethdev" 00:01:37.018 Message: lib/pci: Defining dependency "pci" 00:01:37.018 Message: lib/cmdline: Defining dependency "cmdline" 00:01:37.018 Message: lib/hash: Defining dependency "hash" 00:01:37.018 Message: lib/timer: Defining dependency "timer" 00:01:37.018 Message: lib/compressdev: Defining dependency "compressdev" 00:01:37.018 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:37.018 Message: lib/dmadev: Defining dependency "dmadev" 00:01:37.018 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:37.018 Message: lib/power: Defining dependency "power" 00:01:37.018 Message: lib/reorder: Defining dependency "reorder" 00:01:37.018 Message: lib/security: Defining dependency "security" 00:01:37.018 Has header "linux/userfaultfd.h" : YES 00:01:37.018 Has header "linux/vduse.h" : YES 00:01:37.018 Message: lib/vhost: Defining dependency "vhost" 00:01:37.018 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:37.018 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:37.018 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.018 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.018 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:37.018 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:37.018 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:37.018 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:37.018 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:37.018 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:37.018 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:37.018 Configuring doxy-api-html.conf using configuration 00:01:37.018 Configuring doxy-api-man.conf using configuration 00:01:37.018 Program mandb found: YES (/usr/bin/mandb) 00:01:37.018 Program sphinx-build found: NO 00:01:37.018 Configuring rte_build_config.h using configuration 00:01:37.018 Message: 00:01:37.018 ================= 00:01:37.018 Applications Enabled 00:01:37.018 ================= 00:01:37.018 00:01:37.018 apps: 00:01:37.018 00:01:37.018 00:01:37.018 Message: 00:01:37.018 ================= 00:01:37.018 Libraries Enabled 00:01:37.018 ================= 00:01:37.018 00:01:37.018 libs: 00:01:37.018 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:37.018 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:37.018 cryptodev, dmadev, power, reorder, security, vhost, 00:01:37.018 00:01:37.018 Message: 00:01:37.018 =============== 00:01:37.018 Drivers Enabled 00:01:37.018 =============== 00:01:37.018 00:01:37.018 common: 00:01:37.018 00:01:37.018 bus: 00:01:37.018 pci, vdev, 00:01:37.018 mempool: 00:01:37.018 ring, 00:01:37.018 dma: 00:01:37.018 00:01:37.018 net: 00:01:37.018 00:01:37.018 crypto: 00:01:37.018 00:01:37.018 compress: 00:01:37.018 00:01:37.018 vdpa: 00:01:37.018 00:01:37.018 00:01:37.018 Message: 00:01:37.018 ================= 00:01:37.018 Content Skipped 00:01:37.018 ================= 00:01:37.018 00:01:37.018 apps: 00:01:37.018 dumpcap: explicitly disabled via build config 00:01:37.018 graph: explicitly disabled via build config 00:01:37.018 pdump: explicitly disabled via build config 00:01:37.018 proc-info: explicitly disabled via build config 00:01:37.018 test-acl: explicitly disabled via build config 00:01:37.018 test-bbdev: explicitly disabled via build config 00:01:37.018 test-cmdline: explicitly disabled via build config 00:01:37.018 test-compress-perf: explicitly disabled via build config 00:01:37.018 test-crypto-perf: explicitly disabled via build 
config 00:01:37.018 test-dma-perf: explicitly disabled via build config 00:01:37.018 test-eventdev: explicitly disabled via build config 00:01:37.018 test-fib: explicitly disabled via build config 00:01:37.018 test-flow-perf: explicitly disabled via build config 00:01:37.018 test-gpudev: explicitly disabled via build config 00:01:37.018 test-mldev: explicitly disabled via build config 00:01:37.018 test-pipeline: explicitly disabled via build config 00:01:37.018 test-pmd: explicitly disabled via build config 00:01:37.018 test-regex: explicitly disabled via build config 00:01:37.018 test-sad: explicitly disabled via build config 00:01:37.018 test-security-perf: explicitly disabled via build config 00:01:37.018 00:01:37.018 libs: 00:01:37.018 argparse: explicitly disabled via build config 00:01:37.018 metrics: explicitly disabled via build config 00:01:37.018 acl: explicitly disabled via build config 00:01:37.018 bbdev: explicitly disabled via build config 00:01:37.018 bitratestats: explicitly disabled via build config 00:01:37.018 bpf: explicitly disabled via build config 00:01:37.018 cfgfile: explicitly disabled via build config 00:01:37.018 distributor: explicitly disabled via build config 00:01:37.018 efd: explicitly disabled via build config 00:01:37.018 eventdev: explicitly disabled via build config 00:01:37.018 dispatcher: explicitly disabled via build config 00:01:37.018 gpudev: explicitly disabled via build config 00:01:37.018 gro: explicitly disabled via build config 00:01:37.018 gso: explicitly disabled via build config 00:01:37.018 ip_frag: explicitly disabled via build config 00:01:37.018 jobstats: explicitly disabled via build config 00:01:37.018 latencystats: explicitly disabled via build config 00:01:37.018 lpm: explicitly disabled via build config 00:01:37.018 member: explicitly disabled via build config 00:01:37.018 pcapng: explicitly disabled via build config 00:01:37.018 rawdev: explicitly disabled via build config 00:01:37.018 regexdev: explicitly 
disabled via build config 00:01:37.018 mldev: explicitly disabled via build config 00:01:37.018 rib: explicitly disabled via build config 00:01:37.018 sched: explicitly disabled via build config 00:01:37.018 stack: explicitly disabled via build config 00:01:37.018 ipsec: explicitly disabled via build config 00:01:37.018 pdcp: explicitly disabled via build config 00:01:37.018 fib: explicitly disabled via build config 00:01:37.018 port: explicitly disabled via build config 00:01:37.018 pdump: explicitly disabled via build config 00:01:37.018 table: explicitly disabled via build config 00:01:37.018 pipeline: explicitly disabled via build config 00:01:37.018 graph: explicitly disabled via build config 00:01:37.018 node: explicitly disabled via build config 00:01:37.018 00:01:37.018 drivers: 00:01:37.019 common/cpt: not in enabled drivers build config 00:01:37.019 common/dpaax: not in enabled drivers build config 00:01:37.019 common/iavf: not in enabled drivers build config 00:01:37.019 common/idpf: not in enabled drivers build config 00:01:37.019 common/ionic: not in enabled drivers build config 00:01:37.019 common/mvep: not in enabled drivers build config 00:01:37.019 common/octeontx: not in enabled drivers build config 00:01:37.019 bus/auxiliary: not in enabled drivers build config 00:01:37.019 bus/cdx: not in enabled drivers build config 00:01:37.019 bus/dpaa: not in enabled drivers build config 00:01:37.019 bus/fslmc: not in enabled drivers build config 00:01:37.019 bus/ifpga: not in enabled drivers build config 00:01:37.019 bus/platform: not in enabled drivers build config 00:01:37.019 bus/uacce: not in enabled drivers build config 00:01:37.019 bus/vmbus: not in enabled drivers build config 00:01:37.019 common/cnxk: not in enabled drivers build config 00:01:37.019 common/mlx5: not in enabled drivers build config 00:01:37.019 common/nfp: not in enabled drivers build config 00:01:37.019 common/nitrox: not in enabled drivers build config 00:01:37.019 common/qat: not 
in enabled drivers build config 00:01:37.019 common/sfc_efx: not in enabled drivers build config 00:01:37.019 mempool/bucket: not in enabled drivers build config 00:01:37.019 mempool/cnxk: not in enabled drivers build config 00:01:37.019 mempool/dpaa: not in enabled drivers build config 00:01:37.019 mempool/dpaa2: not in enabled drivers build config 00:01:37.019 mempool/octeontx: not in enabled drivers build config 00:01:37.019 mempool/stack: not in enabled drivers build config 00:01:37.019 dma/cnxk: not in enabled drivers build config 00:01:37.019 dma/dpaa: not in enabled drivers build config 00:01:37.019 dma/dpaa2: not in enabled drivers build config 00:01:37.019 dma/hisilicon: not in enabled drivers build config 00:01:37.019 dma/idxd: not in enabled drivers build config 00:01:37.019 dma/ioat: not in enabled drivers build config 00:01:37.019 dma/skeleton: not in enabled drivers build config 00:01:37.019 net/af_packet: not in enabled drivers build config 00:01:37.019 net/af_xdp: not in enabled drivers build config 00:01:37.019 net/ark: not in enabled drivers build config 00:01:37.019 net/atlantic: not in enabled drivers build config 00:01:37.019 net/avp: not in enabled drivers build config 00:01:37.019 net/axgbe: not in enabled drivers build config 00:01:37.019 net/bnx2x: not in enabled drivers build config 00:01:37.019 net/bnxt: not in enabled drivers build config 00:01:37.019 net/bonding: not in enabled drivers build config 00:01:37.019 net/cnxk: not in enabled drivers build config 00:01:37.019 net/cpfl: not in enabled drivers build config 00:01:37.019 net/cxgbe: not in enabled drivers build config 00:01:37.019 net/dpaa: not in enabled drivers build config 00:01:37.019 net/dpaa2: not in enabled drivers build config 00:01:37.019 net/e1000: not in enabled drivers build config 00:01:37.019 net/ena: not in enabled drivers build config 00:01:37.019 net/enetc: not in enabled drivers build config 00:01:37.019 net/enetfec: not in enabled drivers build config 
00:01:37.019 net/enic: not in enabled drivers build config 00:01:37.019 net/failsafe: not in enabled drivers build config 00:01:37.019 net/fm10k: not in enabled drivers build config 00:01:37.019 net/gve: not in enabled drivers build config 00:01:37.019 net/hinic: not in enabled drivers build config 00:01:37.019 net/hns3: not in enabled drivers build config 00:01:37.019 net/i40e: not in enabled drivers build config 00:01:37.019 net/iavf: not in enabled drivers build config 00:01:37.019 net/ice: not in enabled drivers build config 00:01:37.019 net/idpf: not in enabled drivers build config 00:01:37.019 net/igc: not in enabled drivers build config 00:01:37.019 net/ionic: not in enabled drivers build config 00:01:37.019 net/ipn3ke: not in enabled drivers build config 00:01:37.019 net/ixgbe: not in enabled drivers build config 00:01:37.019 net/mana: not in enabled drivers build config 00:01:37.019 net/memif: not in enabled drivers build config 00:01:37.019 net/mlx4: not in enabled drivers build config 00:01:37.019 net/mlx5: not in enabled drivers build config 00:01:37.019 net/mvneta: not in enabled drivers build config 00:01:37.019 net/mvpp2: not in enabled drivers build config 00:01:37.019 net/netvsc: not in enabled drivers build config 00:01:37.019 net/nfb: not in enabled drivers build config 00:01:37.019 net/nfp: not in enabled drivers build config 00:01:37.019 net/ngbe: not in enabled drivers build config 00:01:37.019 net/null: not in enabled drivers build config 00:01:37.019 net/octeontx: not in enabled drivers build config 00:01:37.019 net/octeon_ep: not in enabled drivers build config 00:01:37.019 net/pcap: not in enabled drivers build config 00:01:37.019 net/pfe: not in enabled drivers build config 00:01:37.019 net/qede: not in enabled drivers build config 00:01:37.019 net/ring: not in enabled drivers build config 00:01:37.019 net/sfc: not in enabled drivers build config 00:01:37.019 net/softnic: not in enabled drivers build config 00:01:37.019 net/tap: not in 
enabled drivers build config 00:01:37.019 net/thunderx: not in enabled drivers build config 00:01:37.019 net/txgbe: not in enabled drivers build config 00:01:37.019 net/vdev_netvsc: not in enabled drivers build config 00:01:37.019 net/vhost: not in enabled drivers build config 00:01:37.019 net/virtio: not in enabled drivers build config 00:01:37.019 net/vmxnet3: not in enabled drivers build config 00:01:37.019 raw/*: missing internal dependency, "rawdev" 00:01:37.019 crypto/armv8: not in enabled drivers build config 00:01:37.019 crypto/bcmfs: not in enabled drivers build config 00:01:37.019 crypto/caam_jr: not in enabled drivers build config 00:01:37.019 crypto/ccp: not in enabled drivers build config 00:01:37.019 crypto/cnxk: not in enabled drivers build config 00:01:37.019 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.019 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.019 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.019 crypto/mlx5: not in enabled drivers build config 00:01:37.019 crypto/mvsam: not in enabled drivers build config 00:01:37.019 crypto/nitrox: not in enabled drivers build config 00:01:37.019 crypto/null: not in enabled drivers build config 00:01:37.019 crypto/octeontx: not in enabled drivers build config 00:01:37.019 crypto/openssl: not in enabled drivers build config 00:01:37.019 crypto/scheduler: not in enabled drivers build config 00:01:37.019 crypto/uadk: not in enabled drivers build config 00:01:37.019 crypto/virtio: not in enabled drivers build config 00:01:37.019 compress/isal: not in enabled drivers build config 00:01:37.019 compress/mlx5: not in enabled drivers build config 00:01:37.019 compress/nitrox: not in enabled drivers build config 00:01:37.019 compress/octeontx: not in enabled drivers build config 00:01:37.019 compress/zlib: not in enabled drivers build config 00:01:37.019 regex/*: missing internal dependency, "regexdev" 00:01:37.019 ml/*: missing internal dependency, "mldev" 
00:01:37.019 vdpa/ifc: not in enabled drivers build config 00:01:37.019 vdpa/mlx5: not in enabled drivers build config 00:01:37.019 vdpa/nfp: not in enabled drivers build config 00:01:37.019 vdpa/sfc: not in enabled drivers build config 00:01:37.019 event/*: missing internal dependency, "eventdev" 00:01:37.019 baseband/*: missing internal dependency, "bbdev" 00:01:37.019 gpu/*: missing internal dependency, "gpudev" 00:01:37.019 00:01:37.019 00:01:37.019 Build targets in project: 84 00:01:37.019 00:01:37.019 DPDK 24.03.0 00:01:37.019 00:01:37.019 User defined options 00:01:37.019 buildtype : debug 00:01:37.019 default_library : shared 00:01:37.019 libdir : lib 00:01:37.019 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:37.020 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:37.020 c_link_args : 00:01:37.020 cpu_instruction_set: native 00:01:37.020 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:37.020 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:37.020 enable_docs : false 00:01:37.020 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:37.020 enable_kmods : false 00:01:37.020 max_lcores : 128 00:01:37.020 tests : false 00:01:37.020 00:01:37.020 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.020 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:37.020 [1/267] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:01:37.020 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.020 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.020 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.020 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.020 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.020 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.020 [8/267] Linking static target lib/librte_kvargs.a 00:01:37.020 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.020 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.020 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.020 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.020 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.020 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:37.020 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.020 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.020 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.279 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:37.279 [19/267] Linking static target lib/librte_log.a 00:01:37.279 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.279 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.279 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:37.279 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.279 [24/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.279 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:37.279 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:37.279 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:37.279 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.279 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.279 [30/267] Linking static target lib/librte_pci.a 00:01:37.279 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.279 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:37.279 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:37.279 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:37.279 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:37.279 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:37.279 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:37.279 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:37.539 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:37.539 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.539 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.539 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.539 [43/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.539 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.539 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.539 [46/267] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:37.539 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.539 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.539 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:37.539 [50/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:37.539 [51/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:37.539 [52/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:37.539 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.539 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:37.539 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:37.539 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.539 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.539 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.539 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:37.539 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.539 [61/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:37.539 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.539 [63/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:37.539 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:37.539 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:37.539 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:37.539 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:37.539 [68/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:37.539 [69/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:37.539 [70/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:37.539 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:37.539 [72/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:37.539 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:37.539 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:37.539 [75/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:37.539 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.539 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:37.539 [78/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:37.539 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:37.539 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:37.539 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.539 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:37.539 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:37.539 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:37.539 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.539 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:37.539 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.539 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:37.539 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:37.539 [90/267] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.539 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:37.539 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:37.539 [93/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:37.539 [94/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:37.801 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:37.801 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:37.801 [97/267] Linking static target lib/librte_telemetry.a 00:01:37.801 [98/267] Linking static target lib/librte_meter.a 00:01:37.801 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:37.801 [100/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:37.801 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:37.801 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:37.801 [103/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:37.801 [104/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:37.801 [105/267] Linking static target lib/librte_timer.a 00:01:37.801 [106/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:37.801 [107/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:37.801 [108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:37.801 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:37.801 [110/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:37.801 [111/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:37.801 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.801 [113/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.801 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:37.801 [115/267] Linking static target lib/librte_ring.a 00:01:37.801 [116/267] Linking static target lib/librte_cmdline.a 00:01:37.801 [117/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:37.801 [118/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:37.801 [119/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:37.801 [120/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:37.801 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:37.801 [122/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:37.801 [123/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:37.801 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:37.801 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.801 [126/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:37.801 [127/267] Linking static target lib/librte_dmadev.a 00:01:37.801 [128/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:37.801 [129/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:37.801 [130/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:37.801 [131/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:37.801 [132/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.801 [133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:37.801 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:37.801 [135/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.801 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:37.801 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:37.801 [138/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.801 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:37.801 [140/267] Linking static target lib/librte_net.a 00:01:37.801 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:37.801 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.801 [143/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.801 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.801 [145/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:37.801 [146/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:37.801 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:37.801 [148/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.801 [149/267] Linking target lib/librte_log.so.24.1 00:01:37.801 [150/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:37.801 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.801 [152/267] Linking static target lib/librte_compressdev.a 00:01:37.801 [153/267] Linking static target lib/librte_mempool.a 00:01:37.801 [154/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:37.801 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:37.801 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:37.801 [157/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.801 [158/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:37.801 [159/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:37.801 [160/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:37.801 [161/267] Linking static target lib/librte_reorder.a 00:01:37.801 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:37.801 [163/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:37.801 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:37.801 [165/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.801 [166/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:37.801 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.801 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:37.801 [169/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:37.801 [170/267] Linking static target lib/librte_power.a 00:01:37.801 [171/267] Linking static target lib/librte_rcu.a 00:01:37.801 [172/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:37.801 [173/267] Linking static target lib/librte_eal.a 00:01:37.801 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:37.801 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:37.801 [176/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:37.801 [177/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:37.801 [178/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:37.801 [179/267] Linking static target lib/librte_mbuf.a 00:01:38.063 [180/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:38.063 [181/267] Linking static target lib/librte_security.a 00:01:38.063 [182/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:38.063 [183/267] Linking target lib/librte_kvargs.so.24.1 00:01:38.063 
[184/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.063 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:38.063 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:38.063 [187/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.063 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:38.063 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:38.063 [190/267] Linking static target drivers/librte_bus_vdev.a 00:01:38.063 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:38.063 [192/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:38.063 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:38.063 [194/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.063 [195/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.063 [196/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.063 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:38.063 [198/267] Linking static target lib/librte_hash.a 00:01:38.063 [199/267] Linking static target drivers/librte_bus_pci.a 00:01:38.063 [200/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.063 [201/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.063 [202/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.063 [203/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.063 [204/267] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.063 [205/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:38.063 [206/267] Linking static target drivers/librte_mempool_ring.a 00:01:38.323 [207/267] Linking static target lib/librte_cryptodev.a 00:01:38.323 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.323 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.323 [210/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.323 [211/267] Linking target lib/librte_telemetry.so.24.1 00:01:38.323 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.323 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:38.323 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.323 [215/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:38.584 [216/267] Linking static target lib/librte_ethdev.a 00:01:38.584 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.584 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.584 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:38.584 [220/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.844 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.845 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.845 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.104 [224/267] Generating 
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.104 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.104 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.673 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:39.673 [228/267] Linking static target lib/librte_vhost.a 00:01:40.241 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.623 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.204 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.590 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.590 [233/267] Linking target lib/librte_eal.so.24.1 00:01:49.590 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:49.590 [235/267] Linking target lib/librte_ring.so.24.1 00:01:49.590 [236/267] Linking target lib/librte_pci.so.24.1 00:01:49.590 [237/267] Linking target lib/librte_meter.so.24.1 00:01:49.590 [238/267] Linking target lib/librte_timer.so.24.1 00:01:49.590 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:49.590 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:49.851 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:49.851 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:49.851 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:49.851 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:49.851 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:49.851 [246/267] Linking target 
drivers/librte_bus_pci.so.24.1 00:01:49.851 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:49.851 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:49.851 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:50.112 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:50.112 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:50.112 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:50.112 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:50.112 [254/267] Linking target lib/librte_net.so.24.1 00:01:50.112 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:50.112 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:50.112 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:50.374 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:50.374 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:50.374 [260/267] Linking target lib/librte_hash.so.24.1 00:01:50.374 [261/267] Linking target lib/librte_security.so.24.1 00:01:50.374 [262/267] Linking target lib/librte_cmdline.so.24.1 00:01:50.374 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:50.635 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:50.635 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:50.635 [266/267] Linking target lib/librte_power.so.24.1 00:01:50.635 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:50.635 INFO: autodetecting backend as ninja 00:01:50.635 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:53.935 CC lib/ut/ut.o 00:01:53.935 CC lib/log/log.o 00:01:53.935 CC lib/log/log_flags.o 00:01:53.935 CC lib/ut_mock/mock.o 00:01:53.935 CC 
lib/log/log_deprecated.o 00:01:54.197 LIB libspdk_ut_mock.a 00:01:54.197 LIB libspdk_ut.a 00:01:54.197 SO libspdk_ut_mock.so.6.0 00:01:54.197 LIB libspdk_log.a 00:01:54.197 SO libspdk_ut.so.2.0 00:01:54.197 SO libspdk_log.so.7.1 00:01:54.197 SYMLINK libspdk_ut_mock.so 00:01:54.197 SYMLINK libspdk_ut.so 00:01:54.457 SYMLINK libspdk_log.so 00:01:54.719 CXX lib/trace_parser/trace.o 00:01:54.719 CC lib/util/base64.o 00:01:54.719 CC lib/util/cpuset.o 00:01:54.719 CC lib/util/bit_array.o 00:01:54.719 CC lib/dma/dma.o 00:01:54.719 CC lib/ioat/ioat.o 00:01:54.719 CC lib/util/crc16.o 00:01:54.719 CC lib/util/crc32.o 00:01:54.719 CC lib/util/crc32c.o 00:01:54.719 CC lib/util/crc32_ieee.o 00:01:54.719 CC lib/util/fd.o 00:01:54.719 CC lib/util/crc64.o 00:01:54.719 CC lib/util/dif.o 00:01:54.719 CC lib/util/fd_group.o 00:01:54.719 CC lib/util/file.o 00:01:54.719 CC lib/util/hexlify.o 00:01:54.719 CC lib/util/iov.o 00:01:54.719 CC lib/util/math.o 00:01:54.719 CC lib/util/net.o 00:01:54.719 CC lib/util/pipe.o 00:01:54.719 CC lib/util/strerror_tls.o 00:01:54.719 CC lib/util/string.o 00:01:54.719 CC lib/util/uuid.o 00:01:54.719 CC lib/util/xor.o 00:01:54.719 CC lib/util/zipf.o 00:01:54.719 CC lib/util/md5.o 00:01:54.980 CC lib/vfio_user/host/vfio_user_pci.o 00:01:54.980 CC lib/vfio_user/host/vfio_user.o 00:01:54.980 LIB libspdk_dma.a 00:01:54.980 SO libspdk_dma.so.5.0 00:01:54.980 LIB libspdk_ioat.a 00:01:54.980 SO libspdk_ioat.so.7.0 00:01:54.980 SYMLINK libspdk_dma.so 00:01:54.980 SYMLINK libspdk_ioat.so 00:01:55.241 LIB libspdk_vfio_user.a 00:01:55.241 SO libspdk_vfio_user.so.5.0 00:01:55.241 LIB libspdk_util.a 00:01:55.241 SYMLINK libspdk_vfio_user.so 00:01:55.241 SO libspdk_util.so.10.1 00:01:55.502 SYMLINK libspdk_util.so 00:01:55.502 LIB libspdk_trace_parser.a 00:01:55.502 SO libspdk_trace_parser.so.6.0 00:01:55.763 SYMLINK libspdk_trace_parser.so 00:01:55.763 CC lib/conf/conf.o 00:01:55.763 CC lib/vmd/vmd.o 00:01:55.763 CC lib/vmd/led.o 00:01:55.763 CC lib/json/json_parse.o 
00:01:55.763 CC lib/rdma_utils/rdma_utils.o 00:01:55.763 CC lib/json/json_util.o 00:01:55.763 CC lib/json/json_write.o 00:01:55.763 CC lib/idxd/idxd.o 00:01:55.763 CC lib/idxd/idxd_user.o 00:01:55.763 CC lib/idxd/idxd_kernel.o 00:01:55.763 CC lib/env_dpdk/env.o 00:01:55.763 CC lib/env_dpdk/memory.o 00:01:55.763 CC lib/env_dpdk/pci.o 00:01:55.763 CC lib/env_dpdk/init.o 00:01:55.763 CC lib/env_dpdk/threads.o 00:01:55.763 CC lib/env_dpdk/pci_ioat.o 00:01:55.763 CC lib/env_dpdk/pci_virtio.o 00:01:55.763 CC lib/env_dpdk/pci_vmd.o 00:01:55.763 CC lib/env_dpdk/pci_idxd.o 00:01:55.763 CC lib/env_dpdk/pci_event.o 00:01:55.763 CC lib/env_dpdk/sigbus_handler.o 00:01:55.763 CC lib/env_dpdk/pci_dpdk.o 00:01:55.763 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.763 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:56.024 LIB libspdk_conf.a 00:01:56.024 SO libspdk_conf.so.6.0 00:01:56.024 LIB libspdk_json.a 00:01:56.024 LIB libspdk_rdma_utils.a 00:01:56.024 SYMLINK libspdk_conf.so 00:01:56.024 SO libspdk_json.so.6.0 00:01:56.024 SO libspdk_rdma_utils.so.1.0 00:01:56.285 SYMLINK libspdk_json.so 00:01:56.285 SYMLINK libspdk_rdma_utils.so 00:01:56.285 LIB libspdk_idxd.a 00:01:56.285 LIB libspdk_vmd.a 00:01:56.285 SO libspdk_idxd.so.12.1 00:01:56.547 SO libspdk_vmd.so.6.0 00:01:56.547 SYMLINK libspdk_idxd.so 00:01:56.547 SYMLINK libspdk_vmd.so 00:01:56.547 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.547 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.547 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.547 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.547 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.547 CC lib/rdma_provider/common.o 00:01:56.807 LIB libspdk_rdma_provider.a 00:01:56.807 LIB libspdk_jsonrpc.a 00:01:56.807 SO libspdk_rdma_provider.so.7.0 00:01:56.807 SO libspdk_jsonrpc.so.6.0 00:01:56.807 SYMLINK libspdk_rdma_provider.so 00:01:56.807 SYMLINK libspdk_jsonrpc.so 00:01:57.068 LIB libspdk_env_dpdk.a 00:01:57.068 SO libspdk_env_dpdk.so.15.1 00:01:57.329 SYMLINK libspdk_env_dpdk.so 00:01:57.329 CC 
lib/rpc/rpc.o 00:01:57.590 LIB libspdk_rpc.a 00:01:57.590 SO libspdk_rpc.so.6.0 00:01:57.590 SYMLINK libspdk_rpc.so 00:01:57.850 CC lib/trace/trace.o 00:01:57.850 CC lib/trace/trace_rpc.o 00:01:57.850 CC lib/trace/trace_flags.o 00:01:57.850 CC lib/keyring/keyring.o 00:01:57.850 CC lib/keyring/keyring_rpc.o 00:01:57.850 CC lib/notify/notify.o 00:01:57.850 CC lib/notify/notify_rpc.o 00:01:58.110 LIB libspdk_notify.a 00:01:58.111 SO libspdk_notify.so.6.0 00:01:58.111 LIB libspdk_keyring.a 00:01:58.111 LIB libspdk_trace.a 00:01:58.111 SO libspdk_keyring.so.2.0 00:01:58.111 SO libspdk_trace.so.11.0 00:01:58.371 SYMLINK libspdk_notify.so 00:01:58.371 SYMLINK libspdk_keyring.so 00:01:58.371 SYMLINK libspdk_trace.so 00:01:58.631 CC lib/sock/sock.o 00:01:58.631 CC lib/thread/thread.o 00:01:58.631 CC lib/sock/sock_rpc.o 00:01:58.631 CC lib/thread/iobuf.o 00:01:59.203 LIB libspdk_sock.a 00:01:59.203 SO libspdk_sock.so.10.0 00:01:59.203 SYMLINK libspdk_sock.so 00:01:59.462 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:59.462 CC lib/nvme/nvme_ctrlr.o 00:01:59.462 CC lib/nvme/nvme_fabric.o 00:01:59.463 CC lib/nvme/nvme_ns_cmd.o 00:01:59.463 CC lib/nvme/nvme_ns.o 00:01:59.463 CC lib/nvme/nvme_pcie_common.o 00:01:59.463 CC lib/nvme/nvme_pcie.o 00:01:59.463 CC lib/nvme/nvme_qpair.o 00:01:59.463 CC lib/nvme/nvme.o 00:01:59.463 CC lib/nvme/nvme_quirks.o 00:01:59.463 CC lib/nvme/nvme_transport.o 00:01:59.463 CC lib/nvme/nvme_discovery.o 00:01:59.463 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:59.463 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:59.463 CC lib/nvme/nvme_tcp.o 00:01:59.463 CC lib/nvme/nvme_opal.o 00:01:59.463 CC lib/nvme/nvme_io_msg.o 00:01:59.463 CC lib/nvme/nvme_poll_group.o 00:01:59.463 CC lib/nvme/nvme_zns.o 00:01:59.463 CC lib/nvme/nvme_stubs.o 00:01:59.463 CC lib/nvme/nvme_auth.o 00:01:59.463 CC lib/nvme/nvme_cuse.o 00:01:59.463 CC lib/nvme/nvme_vfio_user.o 00:01:59.463 CC lib/nvme/nvme_rdma.o 00:02:00.032 LIB libspdk_thread.a 00:02:00.032 SO libspdk_thread.so.11.0 00:02:00.032 SYMLINK 
libspdk_thread.so 00:02:00.293 CC lib/vfu_tgt/tgt_rpc.o 00:02:00.293 CC lib/vfu_tgt/tgt_endpoint.o 00:02:00.293 CC lib/virtio/virtio.o 00:02:00.293 CC lib/fsdev/fsdev.o 00:02:00.293 CC lib/virtio/virtio_pci.o 00:02:00.293 CC lib/virtio/virtio_vhost_user.o 00:02:00.293 CC lib/fsdev/fsdev_io.o 00:02:00.293 CC lib/init/json_config.o 00:02:00.293 CC lib/init/subsystem.o 00:02:00.293 CC lib/virtio/virtio_vfio_user.o 00:02:00.293 CC lib/init/rpc.o 00:02:00.293 CC lib/fsdev/fsdev_rpc.o 00:02:00.293 CC lib/accel/accel.o 00:02:00.293 CC lib/init/subsystem_rpc.o 00:02:00.293 CC lib/accel/accel_rpc.o 00:02:00.293 CC lib/accel/accel_sw.o 00:02:00.293 CC lib/blob/blobstore.o 00:02:00.293 CC lib/blob/request.o 00:02:00.293 CC lib/blob/zeroes.o 00:02:00.293 CC lib/blob/blob_bs_dev.o 00:02:00.555 LIB libspdk_init.a 00:02:00.816 LIB libspdk_vfu_tgt.a 00:02:00.816 SO libspdk_init.so.6.0 00:02:00.816 LIB libspdk_virtio.a 00:02:00.816 SO libspdk_vfu_tgt.so.3.0 00:02:00.816 SO libspdk_virtio.so.7.0 00:02:00.816 SYMLINK libspdk_init.so 00:02:00.816 SYMLINK libspdk_vfu_tgt.so 00:02:00.816 SYMLINK libspdk_virtio.so 00:02:01.078 LIB libspdk_fsdev.a 00:02:01.078 SO libspdk_fsdev.so.2.0 00:02:01.078 SYMLINK libspdk_fsdev.so 00:02:01.078 CC lib/event/app.o 00:02:01.078 CC lib/event/reactor.o 00:02:01.078 CC lib/event/log_rpc.o 00:02:01.078 CC lib/event/app_rpc.o 00:02:01.078 CC lib/event/scheduler_static.o 00:02:01.338 LIB libspdk_nvme.a 00:02:01.338 LIB libspdk_accel.a 00:02:01.338 SO libspdk_accel.so.16.0 00:02:01.338 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:01.338 SO libspdk_nvme.so.15.0 00:02:01.599 SYMLINK libspdk_accel.so 00:02:01.599 LIB libspdk_event.a 00:02:01.599 SO libspdk_event.so.14.0 00:02:01.599 SYMLINK libspdk_event.so 00:02:01.599 SYMLINK libspdk_nvme.so 00:02:01.859 CC lib/bdev/bdev.o 00:02:01.859 CC lib/bdev/bdev_rpc.o 00:02:01.859 CC lib/bdev/bdev_zone.o 00:02:01.859 CC lib/bdev/part.o 00:02:01.859 CC lib/bdev/scsi_nvme.o 00:02:02.121 LIB libspdk_fuse_dispatcher.a 
00:02:02.121 SO libspdk_fuse_dispatcher.so.1.0 00:02:02.121 SYMLINK libspdk_fuse_dispatcher.so 00:02:03.063 LIB libspdk_blob.a 00:02:03.063 SO libspdk_blob.so.11.0 00:02:03.063 SYMLINK libspdk_blob.so 00:02:03.635 CC lib/lvol/lvol.o 00:02:03.635 CC lib/blobfs/blobfs.o 00:02:03.635 CC lib/blobfs/tree.o 00:02:04.207 LIB libspdk_bdev.a 00:02:04.207 SO libspdk_bdev.so.17.0 00:02:04.207 LIB libspdk_blobfs.a 00:02:04.468 SO libspdk_blobfs.so.10.0 00:02:04.468 LIB libspdk_lvol.a 00:02:04.468 SYMLINK libspdk_bdev.so 00:02:04.468 SO libspdk_lvol.so.10.0 00:02:04.468 SYMLINK libspdk_blobfs.so 00:02:04.468 SYMLINK libspdk_lvol.so 00:02:04.728 CC lib/nvmf/ctrlr.o 00:02:04.728 CC lib/nvmf/ctrlr_discovery.o 00:02:04.728 CC lib/nvmf/ctrlr_bdev.o 00:02:04.728 CC lib/nvmf/subsystem.o 00:02:04.728 CC lib/nvmf/nvmf.o 00:02:04.728 CC lib/nvmf/nvmf_rpc.o 00:02:04.728 CC lib/nvmf/transport.o 00:02:04.728 CC lib/nvmf/tcp.o 00:02:04.728 CC lib/nvmf/stubs.o 00:02:04.728 CC lib/scsi/dev.o 00:02:04.728 CC lib/nvmf/mdns_server.o 00:02:04.728 CC lib/scsi/lun.o 00:02:04.728 CC lib/nvmf/vfio_user.o 00:02:04.728 CC lib/scsi/port.o 00:02:04.728 CC lib/nvmf/rdma.o 00:02:04.728 CC lib/scsi/scsi.o 00:02:04.728 CC lib/nvmf/auth.o 00:02:04.728 CC lib/scsi/scsi_bdev.o 00:02:04.728 CC lib/ublk/ublk.o 00:02:04.728 CC lib/ublk/ublk_rpc.o 00:02:04.728 CC lib/ftl/ftl_core.o 00:02:04.728 CC lib/scsi/scsi_pr.o 00:02:04.728 CC lib/scsi/scsi_rpc.o 00:02:04.728 CC lib/ftl/ftl_init.o 00:02:04.728 CC lib/scsi/task.o 00:02:04.728 CC lib/ftl/ftl_layout.o 00:02:04.728 CC lib/ftl/ftl_debug.o 00:02:04.728 CC lib/ftl/ftl_io.o 00:02:04.728 CC lib/ftl/ftl_sb.o 00:02:04.728 CC lib/nbd/nbd.o 00:02:04.728 CC lib/ftl/ftl_l2p.o 00:02:04.728 CC lib/nbd/nbd_rpc.o 00:02:04.728 CC lib/ftl/ftl_l2p_flat.o 00:02:04.728 CC lib/ftl/ftl_nv_cache.o 00:02:04.728 CC lib/ftl/ftl_band.o 00:02:04.728 CC lib/ftl/ftl_band_ops.o 00:02:04.728 CC lib/ftl/ftl_writer.o 00:02:04.728 CC lib/ftl/ftl_reloc.o 00:02:04.728 CC lib/ftl/ftl_rq.o 00:02:04.728 
CC lib/ftl/ftl_l2p_cache.o 00:02:04.728 CC lib/ftl/ftl_p2l.o 00:02:04.728 CC lib/ftl/ftl_p2l_log.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.728 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.728 CC lib/ftl/utils/ftl_conf.o 00:02:04.728 CC lib/ftl/utils/ftl_md.o 00:02:04.728 CC lib/ftl/utils/ftl_mempool.o 00:02:04.728 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.728 CC lib/ftl/utils/ftl_property.o 00:02:04.728 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.728 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.728 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.728 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.728 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.728 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.728 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:04.728 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:04.728 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:04.728 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:04.728 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:04.728 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:04.728 CC lib/ftl/ftl_trace.o 00:02:04.728 CC lib/ftl/base/ftl_base_dev.o 00:02:04.729 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:04.729 CC lib/ftl/base/ftl_base_bdev.o 00:02:05.297 LIB libspdk_nbd.a 00:02:05.297 SO libspdk_nbd.so.7.0 00:02:05.297 LIB libspdk_scsi.a 00:02:05.297 SYMLINK libspdk_nbd.so 00:02:05.297 SO libspdk_scsi.so.9.0 00:02:05.556 LIB libspdk_ublk.a 00:02:05.557 SYMLINK libspdk_scsi.so 00:02:05.557 SO libspdk_ublk.so.3.0 00:02:05.557 SYMLINK libspdk_ublk.so 00:02:05.819 LIB 
libspdk_ftl.a 00:02:05.819 CC lib/iscsi/conn.o 00:02:05.819 CC lib/iscsi/init_grp.o 00:02:05.819 CC lib/iscsi/iscsi.o 00:02:05.819 CC lib/iscsi/param.o 00:02:05.819 CC lib/iscsi/portal_grp.o 00:02:05.819 CC lib/iscsi/tgt_node.o 00:02:05.819 CC lib/vhost/vhost.o 00:02:05.819 CC lib/iscsi/iscsi_subsystem.o 00:02:05.819 CC lib/vhost/vhost_rpc.o 00:02:05.819 CC lib/iscsi/iscsi_rpc.o 00:02:05.819 CC lib/iscsi/task.o 00:02:05.819 CC lib/vhost/vhost_scsi.o 00:02:05.819 CC lib/vhost/vhost_blk.o 00:02:05.819 CC lib/vhost/rte_vhost_user.o 00:02:05.819 SO libspdk_ftl.so.9.0 00:02:06.080 SYMLINK libspdk_ftl.so 00:02:06.652 LIB libspdk_nvmf.a 00:02:06.652 SO libspdk_nvmf.so.20.0 00:02:06.912 LIB libspdk_vhost.a 00:02:06.912 SO libspdk_vhost.so.8.0 00:02:06.912 SYMLINK libspdk_nvmf.so 00:02:06.912 SYMLINK libspdk_vhost.so 00:02:06.912 LIB libspdk_iscsi.a 00:02:07.172 SO libspdk_iscsi.so.8.0 00:02:07.172 SYMLINK libspdk_iscsi.so 00:02:07.766 CC module/env_dpdk/env_dpdk_rpc.o 00:02:07.766 CC module/vfu_device/vfu_virtio.o 00:02:07.766 CC module/vfu_device/vfu_virtio_blk.o 00:02:07.766 CC module/vfu_device/vfu_virtio_scsi.o 00:02:07.766 CC module/vfu_device/vfu_virtio_fs.o 00:02:07.766 CC module/vfu_device/vfu_virtio_rpc.o 00:02:08.064 LIB libspdk_env_dpdk_rpc.a 00:02:08.064 CC module/accel/ioat/accel_ioat.o 00:02:08.064 CC module/accel/ioat/accel_ioat_rpc.o 00:02:08.064 CC module/accel/dsa/accel_dsa.o 00:02:08.064 CC module/blob/bdev/blob_bdev.o 00:02:08.064 CC module/accel/dsa/accel_dsa_rpc.o 00:02:08.064 CC module/accel/error/accel_error.o 00:02:08.064 CC module/accel/error/accel_error_rpc.o 00:02:08.064 CC module/keyring/linux/keyring.o 00:02:08.064 CC module/keyring/linux/keyring_rpc.o 00:02:08.064 CC module/sock/posix/posix.o 00:02:08.064 CC module/keyring/file/keyring.o 00:02:08.064 CC module/keyring/file/keyring_rpc.o 00:02:08.064 CC module/scheduler/gscheduler/gscheduler.o 00:02:08.064 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:08.064 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:02:08.064 CC module/accel/iaa/accel_iaa.o 00:02:08.064 CC module/accel/iaa/accel_iaa_rpc.o 00:02:08.064 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:08.064 CC module/fsdev/aio/fsdev_aio.o 00:02:08.064 CC module/fsdev/aio/linux_aio_mgr.o 00:02:08.064 SO libspdk_env_dpdk_rpc.so.6.0 00:02:08.065 SYMLINK libspdk_env_dpdk_rpc.so 00:02:08.065 LIB libspdk_keyring_file.a 00:02:08.065 LIB libspdk_scheduler_gscheduler.a 00:02:08.065 LIB libspdk_scheduler_dpdk_governor.a 00:02:08.325 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:08.325 LIB libspdk_accel_ioat.a 00:02:08.325 SO libspdk_keyring_file.so.2.0 00:02:08.325 LIB libspdk_accel_error.a 00:02:08.325 SO libspdk_scheduler_gscheduler.so.4.0 00:02:08.325 LIB libspdk_keyring_linux.a 00:02:08.325 LIB libspdk_scheduler_dynamic.a 00:02:08.325 LIB libspdk_accel_iaa.a 00:02:08.325 SO libspdk_accel_ioat.so.6.0 00:02:08.325 SO libspdk_accel_error.so.2.0 00:02:08.325 SO libspdk_keyring_linux.so.1.0 00:02:08.325 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:08.325 SO libspdk_scheduler_dynamic.so.4.0 00:02:08.325 SYMLINK libspdk_keyring_file.so 00:02:08.325 LIB libspdk_blob_bdev.a 00:02:08.325 SYMLINK libspdk_scheduler_gscheduler.so 00:02:08.325 LIB libspdk_accel_dsa.a 00:02:08.325 SO libspdk_accel_iaa.so.3.0 00:02:08.325 SO libspdk_blob_bdev.so.11.0 00:02:08.325 SYMLINK libspdk_accel_error.so 00:02:08.325 SYMLINK libspdk_accel_ioat.so 00:02:08.325 SYMLINK libspdk_keyring_linux.so 00:02:08.325 SO libspdk_accel_dsa.so.5.0 00:02:08.325 SYMLINK libspdk_scheduler_dynamic.so 00:02:08.325 SYMLINK libspdk_accel_iaa.so 00:02:08.325 SYMLINK libspdk_blob_bdev.so 00:02:08.325 LIB libspdk_vfu_device.a 00:02:08.325 SYMLINK libspdk_accel_dsa.so 00:02:08.325 SO libspdk_vfu_device.so.3.0 00:02:08.586 SYMLINK libspdk_vfu_device.so 00:02:08.586 LIB libspdk_fsdev_aio.a 00:02:08.586 SO libspdk_fsdev_aio.so.1.0 00:02:08.847 SYMLINK libspdk_fsdev_aio.so 00:02:08.848 LIB libspdk_sock_posix.a 00:02:08.848 SO 
libspdk_sock_posix.so.6.0 00:02:08.848 SYMLINK libspdk_sock_posix.so 00:02:08.848 CC module/blobfs/bdev/blobfs_bdev.o 00:02:08.848 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:08.848 CC module/bdev/null/bdev_null.o 00:02:08.848 CC module/bdev/null/bdev_null_rpc.o 00:02:08.848 CC module/bdev/delay/vbdev_delay.o 00:02:08.848 CC module/bdev/ftl/bdev_ftl.o 00:02:08.848 CC module/bdev/error/vbdev_error.o 00:02:08.848 CC module/bdev/error/vbdev_error_rpc.o 00:02:08.848 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:08.848 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:08.848 CC module/bdev/lvol/vbdev_lvol.o 00:02:08.848 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:08.848 CC module/bdev/passthru/vbdev_passthru.o 00:02:08.848 CC module/bdev/gpt/gpt.o 00:02:08.848 CC module/bdev/gpt/vbdev_gpt.o 00:02:08.848 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:08.848 CC module/bdev/nvme/bdev_nvme.o 00:02:08.848 CC module/bdev/malloc/bdev_malloc.o 00:02:08.848 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:08.848 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:08.848 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:08.848 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:08.848 CC module/bdev/nvme/nvme_rpc.o 00:02:08.848 CC module/bdev/nvme/bdev_mdns_client.o 00:02:08.848 CC module/bdev/nvme/vbdev_opal.o 00:02:08.848 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:08.848 CC module/bdev/aio/bdev_aio.o 00:02:08.848 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:08.848 CC module/bdev/aio/bdev_aio_rpc.o 00:02:08.848 CC module/bdev/raid/bdev_raid.o 00:02:08.848 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:08.848 CC module/bdev/raid/bdev_raid_rpc.o 00:02:08.848 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:08.848 CC module/bdev/raid/bdev_raid_sb.o 00:02:08.848 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:08.848 CC module/bdev/raid/raid0.o 00:02:08.848 CC module/bdev/iscsi/bdev_iscsi.o 00:02:08.848 CC module/bdev/raid/raid1.o 00:02:08.848 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:08.848 CC 
module/bdev/split/vbdev_split.o 00:02:08.848 CC module/bdev/raid/concat.o 00:02:08.848 CC module/bdev/split/vbdev_split_rpc.o 00:02:09.109 LIB libspdk_blobfs_bdev.a 00:02:09.109 LIB libspdk_bdev_null.a 00:02:09.109 SO libspdk_blobfs_bdev.so.6.0 00:02:09.109 SO libspdk_bdev_null.so.6.0 00:02:09.370 SYMLINK libspdk_blobfs_bdev.so 00:02:09.370 LIB libspdk_bdev_error.a 00:02:09.370 LIB libspdk_bdev_split.a 00:02:09.370 SYMLINK libspdk_bdev_null.so 00:02:09.370 LIB libspdk_bdev_gpt.a 00:02:09.370 LIB libspdk_bdev_ftl.a 00:02:09.370 LIB libspdk_bdev_passthru.a 00:02:09.370 SO libspdk_bdev_error.so.6.0 00:02:09.370 SO libspdk_bdev_gpt.so.6.0 00:02:09.370 SO libspdk_bdev_split.so.6.0 00:02:09.370 SO libspdk_bdev_passthru.so.6.0 00:02:09.370 SO libspdk_bdev_ftl.so.6.0 00:02:09.370 LIB libspdk_bdev_zone_block.a 00:02:09.370 LIB libspdk_bdev_delay.a 00:02:09.370 LIB libspdk_bdev_malloc.a 00:02:09.370 LIB libspdk_bdev_aio.a 00:02:09.370 SO libspdk_bdev_zone_block.so.6.0 00:02:09.370 SYMLINK libspdk_bdev_split.so 00:02:09.370 SYMLINK libspdk_bdev_error.so 00:02:09.370 SYMLINK libspdk_bdev_gpt.so 00:02:09.370 SYMLINK libspdk_bdev_passthru.so 00:02:09.370 LIB libspdk_bdev_iscsi.a 00:02:09.370 SO libspdk_bdev_delay.so.6.0 00:02:09.370 SO libspdk_bdev_malloc.so.6.0 00:02:09.370 SO libspdk_bdev_aio.so.6.0 00:02:09.370 SYMLINK libspdk_bdev_ftl.so 00:02:09.370 SO libspdk_bdev_iscsi.so.6.0 00:02:09.370 SYMLINK libspdk_bdev_zone_block.so 00:02:09.370 SYMLINK libspdk_bdev_delay.so 00:02:09.370 SYMLINK libspdk_bdev_malloc.so 00:02:09.370 SYMLINK libspdk_bdev_aio.so 00:02:09.370 LIB libspdk_bdev_lvol.a 00:02:09.370 LIB libspdk_bdev_virtio.a 00:02:09.370 SYMLINK libspdk_bdev_iscsi.so 00:02:09.632 SO libspdk_bdev_lvol.so.6.0 00:02:09.632 SO libspdk_bdev_virtio.so.6.0 00:02:09.632 SYMLINK libspdk_bdev_lvol.so 00:02:09.632 SYMLINK libspdk_bdev_virtio.so 00:02:09.893 LIB libspdk_bdev_raid.a 00:02:09.893 SO libspdk_bdev_raid.so.6.0 00:02:10.154 SYMLINK libspdk_bdev_raid.so 00:02:11.097 LIB 
libspdk_bdev_nvme.a 00:02:11.357 SO libspdk_bdev_nvme.so.7.1 00:02:11.357 SYMLINK libspdk_bdev_nvme.so 00:02:11.930 CC module/event/subsystems/fsdev/fsdev.o 00:02:11.930 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:12.192 CC module/event/subsystems/sock/sock.o 00:02:12.192 CC module/event/subsystems/vmd/vmd.o 00:02:12.192 CC module/event/subsystems/iobuf/iobuf.o 00:02:12.192 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:12.192 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:12.192 CC module/event/subsystems/scheduler/scheduler.o 00:02:12.192 CC module/event/subsystems/keyring/keyring.o 00:02:12.192 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:12.192 LIB libspdk_event_fsdev.a 00:02:12.192 LIB libspdk_event_vfu_tgt.a 00:02:12.192 SO libspdk_event_fsdev.so.1.0 00:02:12.192 LIB libspdk_event_sock.a 00:02:12.192 LIB libspdk_event_keyring.a 00:02:12.192 SO libspdk_event_vfu_tgt.so.3.0 00:02:12.192 LIB libspdk_event_iobuf.a 00:02:12.192 LIB libspdk_event_vmd.a 00:02:12.192 LIB libspdk_event_scheduler.a 00:02:12.192 LIB libspdk_event_vhost_blk.a 00:02:12.192 SO libspdk_event_sock.so.5.0 00:02:12.192 SO libspdk_event_keyring.so.1.0 00:02:12.192 SO libspdk_event_iobuf.so.3.0 00:02:12.192 SO libspdk_event_vmd.so.6.0 00:02:12.192 SO libspdk_event_scheduler.so.4.0 00:02:12.192 SO libspdk_event_vhost_blk.so.3.0 00:02:12.192 SYMLINK libspdk_event_vfu_tgt.so 00:02:12.454 SYMLINK libspdk_event_fsdev.so 00:02:12.454 SYMLINK libspdk_event_sock.so 00:02:12.454 SYMLINK libspdk_event_keyring.so 00:02:12.454 SYMLINK libspdk_event_vmd.so 00:02:12.454 SYMLINK libspdk_event_iobuf.so 00:02:12.454 SYMLINK libspdk_event_scheduler.so 00:02:12.454 SYMLINK libspdk_event_vhost_blk.so 00:02:12.715 CC module/event/subsystems/accel/accel.o 00:02:12.976 LIB libspdk_event_accel.a 00:02:12.976 SO libspdk_event_accel.so.6.0 00:02:12.976 SYMLINK libspdk_event_accel.so 00:02:13.238 CC module/event/subsystems/bdev/bdev.o 00:02:13.499 LIB libspdk_event_bdev.a 00:02:13.499 SO 
libspdk_event_bdev.so.6.0 00:02:13.499 SYMLINK libspdk_event_bdev.so 00:02:14.071 CC module/event/subsystems/nbd/nbd.o 00:02:14.071 CC module/event/subsystems/ublk/ublk.o 00:02:14.071 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:14.071 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:14.071 CC module/event/subsystems/scsi/scsi.o 00:02:14.071 LIB libspdk_event_nbd.a 00:02:14.071 LIB libspdk_event_ublk.a 00:02:14.071 LIB libspdk_event_scsi.a 00:02:14.071 SO libspdk_event_nbd.so.6.0 00:02:14.071 SO libspdk_event_ublk.so.3.0 00:02:14.332 SO libspdk_event_scsi.so.6.0 00:02:14.332 LIB libspdk_event_nvmf.a 00:02:14.332 SYMLINK libspdk_event_nbd.so 00:02:14.332 SYMLINK libspdk_event_ublk.so 00:02:14.332 SO libspdk_event_nvmf.so.6.0 00:02:14.332 SYMLINK libspdk_event_scsi.so 00:02:14.332 SYMLINK libspdk_event_nvmf.so 00:02:14.594 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:14.594 CC module/event/subsystems/iscsi/iscsi.o 00:02:14.855 LIB libspdk_event_vhost_scsi.a 00:02:14.855 LIB libspdk_event_iscsi.a 00:02:14.855 SO libspdk_event_vhost_scsi.so.3.0 00:02:14.855 SO libspdk_event_iscsi.so.6.0 00:02:14.855 SYMLINK libspdk_event_vhost_scsi.so 00:02:14.855 SYMLINK libspdk_event_iscsi.so 00:02:15.115 SO libspdk.so.6.0 00:02:15.115 SYMLINK libspdk.so 00:02:15.685 CC app/spdk_top/spdk_top.o 00:02:15.685 CC app/trace_record/trace_record.o 00:02:15.685 CC app/spdk_nvme_identify/identify.o 00:02:15.685 CC app/spdk_nvme_discover/discovery_aer.o 00:02:15.685 CXX app/trace/trace.o 00:02:15.685 CC app/spdk_lspci/spdk_lspci.o 00:02:15.685 TEST_HEADER include/spdk/accel.h 00:02:15.685 TEST_HEADER include/spdk/accel_module.h 00:02:15.685 CC app/spdk_nvme_perf/perf.o 00:02:15.685 TEST_HEADER include/spdk/assert.h 00:02:15.685 TEST_HEADER include/spdk/barrier.h 00:02:15.685 TEST_HEADER include/spdk/base64.h 00:02:15.685 TEST_HEADER include/spdk/bdev.h 00:02:15.685 CC test/rpc_client/rpc_client_test.o 00:02:15.685 TEST_HEADER include/spdk/bdev_module.h 00:02:15.685 TEST_HEADER 
include/spdk/bdev_zone.h 00:02:15.685 TEST_HEADER include/spdk/bit_array.h 00:02:15.685 TEST_HEADER include/spdk/bit_pool.h 00:02:15.685 TEST_HEADER include/spdk/blob_bdev.h 00:02:15.685 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:15.685 TEST_HEADER include/spdk/blobfs.h 00:02:15.685 TEST_HEADER include/spdk/blob.h 00:02:15.685 TEST_HEADER include/spdk/conf.h 00:02:15.685 TEST_HEADER include/spdk/config.h 00:02:15.685 TEST_HEADER include/spdk/cpuset.h 00:02:15.685 TEST_HEADER include/spdk/crc16.h 00:02:15.685 TEST_HEADER include/spdk/crc32.h 00:02:15.685 TEST_HEADER include/spdk/crc64.h 00:02:15.685 TEST_HEADER include/spdk/dif.h 00:02:15.685 TEST_HEADER include/spdk/dma.h 00:02:15.685 TEST_HEADER include/spdk/endian.h 00:02:15.685 TEST_HEADER include/spdk/env_dpdk.h 00:02:15.685 TEST_HEADER include/spdk/env.h 00:02:15.685 TEST_HEADER include/spdk/event.h 00:02:15.685 TEST_HEADER include/spdk/fd.h 00:02:15.685 TEST_HEADER include/spdk/fd_group.h 00:02:15.685 TEST_HEADER include/spdk/file.h 00:02:15.685 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:15.685 TEST_HEADER include/spdk/fsdev.h 00:02:15.685 TEST_HEADER include/spdk/ftl.h 00:02:15.685 TEST_HEADER include/spdk/fsdev_module.h 00:02:15.685 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:15.685 TEST_HEADER include/spdk/hexlify.h 00:02:15.685 TEST_HEADER include/spdk/gpt_spec.h 00:02:15.685 CC app/nvmf_tgt/nvmf_main.o 00:02:15.685 CC app/spdk_dd/spdk_dd.o 00:02:15.685 TEST_HEADER include/spdk/histogram_data.h 00:02:15.685 CC app/iscsi_tgt/iscsi_tgt.o 00:02:15.685 TEST_HEADER include/spdk/idxd_spec.h 00:02:15.685 TEST_HEADER include/spdk/idxd.h 00:02:15.685 TEST_HEADER include/spdk/ioat.h 00:02:15.685 TEST_HEADER include/spdk/init.h 00:02:15.685 TEST_HEADER include/spdk/ioat_spec.h 00:02:15.685 TEST_HEADER include/spdk/iscsi_spec.h 00:02:15.685 TEST_HEADER include/spdk/json.h 00:02:15.685 TEST_HEADER include/spdk/jsonrpc.h 00:02:15.685 TEST_HEADER include/spdk/keyring.h 00:02:15.685 TEST_HEADER 
include/spdk/keyring_module.h 00:02:15.685 TEST_HEADER include/spdk/likely.h 00:02:15.685 TEST_HEADER include/spdk/log.h 00:02:15.685 TEST_HEADER include/spdk/lvol.h 00:02:15.685 TEST_HEADER include/spdk/md5.h 00:02:15.685 TEST_HEADER include/spdk/memory.h 00:02:15.685 TEST_HEADER include/spdk/mmio.h 00:02:15.686 TEST_HEADER include/spdk/nbd.h 00:02:15.686 TEST_HEADER include/spdk/net.h 00:02:15.686 TEST_HEADER include/spdk/notify.h 00:02:15.686 CC app/spdk_tgt/spdk_tgt.o 00:02:15.686 TEST_HEADER include/spdk/nvme.h 00:02:15.686 TEST_HEADER include/spdk/nvme_intel.h 00:02:15.686 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:15.686 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:15.686 TEST_HEADER include/spdk/nvme_spec.h 00:02:15.686 TEST_HEADER include/spdk/nvme_zns.h 00:02:15.686 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:15.686 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:15.686 TEST_HEADER include/spdk/nvmf.h 00:02:15.686 TEST_HEADER include/spdk/nvmf_spec.h 00:02:15.686 TEST_HEADER include/spdk/nvmf_transport.h 00:02:15.686 TEST_HEADER include/spdk/opal.h 00:02:15.686 TEST_HEADER include/spdk/opal_spec.h 00:02:15.686 TEST_HEADER include/spdk/pipe.h 00:02:15.686 TEST_HEADER include/spdk/pci_ids.h 00:02:15.686 TEST_HEADER include/spdk/queue.h 00:02:15.686 TEST_HEADER include/spdk/reduce.h 00:02:15.686 TEST_HEADER include/spdk/rpc.h 00:02:15.686 TEST_HEADER include/spdk/scheduler.h 00:02:15.686 TEST_HEADER include/spdk/scsi.h 00:02:15.686 TEST_HEADER include/spdk/scsi_spec.h 00:02:15.686 TEST_HEADER include/spdk/sock.h 00:02:15.686 TEST_HEADER include/spdk/string.h 00:02:15.686 TEST_HEADER include/spdk/stdinc.h 00:02:15.686 TEST_HEADER include/spdk/thread.h 00:02:15.686 TEST_HEADER include/spdk/trace_parser.h 00:02:15.686 TEST_HEADER include/spdk/trace.h 00:02:15.686 TEST_HEADER include/spdk/tree.h 00:02:15.686 TEST_HEADER include/spdk/util.h 00:02:15.686 TEST_HEADER include/spdk/ublk.h 00:02:15.686 TEST_HEADER include/spdk/uuid.h 00:02:15.686 TEST_HEADER 
include/spdk/version.h 00:02:15.686 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:15.686 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:15.686 TEST_HEADER include/spdk/vhost.h 00:02:15.686 TEST_HEADER include/spdk/xor.h 00:02:15.686 TEST_HEADER include/spdk/vmd.h 00:02:15.686 CXX test/cpp_headers/accel.o 00:02:15.686 TEST_HEADER include/spdk/zipf.h 00:02:15.686 CXX test/cpp_headers/accel_module.o 00:02:15.686 CXX test/cpp_headers/assert.o 00:02:15.686 CXX test/cpp_headers/barrier.o 00:02:15.686 CXX test/cpp_headers/base64.o 00:02:15.686 CXX test/cpp_headers/bdev.o 00:02:15.686 CXX test/cpp_headers/bdev_zone.o 00:02:15.686 CXX test/cpp_headers/bdev_module.o 00:02:15.686 CXX test/cpp_headers/bit_pool.o 00:02:15.686 CXX test/cpp_headers/bit_array.o 00:02:15.686 CXX test/cpp_headers/blobfs_bdev.o 00:02:15.686 CXX test/cpp_headers/blob_bdev.o 00:02:15.686 CXX test/cpp_headers/blob.o 00:02:15.686 CXX test/cpp_headers/blobfs.o 00:02:15.686 CXX test/cpp_headers/conf.o 00:02:15.686 CXX test/cpp_headers/config.o 00:02:15.686 CXX test/cpp_headers/cpuset.o 00:02:15.686 CXX test/cpp_headers/crc16.o 00:02:15.686 CXX test/cpp_headers/crc32.o 00:02:15.686 CXX test/cpp_headers/dif.o 00:02:15.686 CXX test/cpp_headers/crc64.o 00:02:15.686 CXX test/cpp_headers/endian.o 00:02:15.686 CXX test/cpp_headers/dma.o 00:02:15.686 CXX test/cpp_headers/env_dpdk.o 00:02:15.686 CXX test/cpp_headers/env.o 00:02:15.686 CXX test/cpp_headers/event.o 00:02:15.686 CXX test/cpp_headers/file.o 00:02:15.686 CXX test/cpp_headers/fd.o 00:02:15.686 CXX test/cpp_headers/fd_group.o 00:02:15.686 CXX test/cpp_headers/fsdev_module.o 00:02:15.686 CXX test/cpp_headers/fsdev.o 00:02:15.686 CXX test/cpp_headers/histogram_data.o 00:02:15.686 CXX test/cpp_headers/ftl.o 00:02:15.686 CXX test/cpp_headers/hexlify.o 00:02:15.686 CXX test/cpp_headers/gpt_spec.o 00:02:15.686 CXX test/cpp_headers/fuse_dispatcher.o 00:02:15.686 CXX test/cpp_headers/idxd.o 00:02:15.686 CXX test/cpp_headers/init.o 00:02:15.686 CXX 
test/cpp_headers/ioat_spec.o 00:02:15.686 CXX test/cpp_headers/idxd_spec.o 00:02:15.686 CXX test/cpp_headers/ioat.o 00:02:15.686 CXX test/cpp_headers/iscsi_spec.o 00:02:15.686 CXX test/cpp_headers/jsonrpc.o 00:02:15.686 CXX test/cpp_headers/keyring_module.o 00:02:15.686 CXX test/cpp_headers/json.o 00:02:15.686 CXX test/cpp_headers/keyring.o 00:02:15.686 CXX test/cpp_headers/memory.o 00:02:15.686 CXX test/cpp_headers/likely.o 00:02:15.686 CXX test/cpp_headers/nbd.o 00:02:15.686 CXX test/cpp_headers/mmio.o 00:02:15.686 CXX test/cpp_headers/log.o 00:02:15.686 CXX test/cpp_headers/lvol.o 00:02:15.686 CXX test/cpp_headers/notify.o 00:02:15.686 CXX test/cpp_headers/md5.o 00:02:15.686 CXX test/cpp_headers/net.o 00:02:15.686 CXX test/cpp_headers/nvme.o 00:02:15.686 CXX test/cpp_headers/nvme_intel.o 00:02:15.686 CXX test/cpp_headers/nvme_ocssd.o 00:02:15.686 CXX test/cpp_headers/nvme_spec.o 00:02:15.686 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:15.686 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:15.686 CXX test/cpp_headers/nvme_zns.o 00:02:15.686 CC test/env/vtophys/vtophys.o 00:02:15.686 CC test/app/histogram_perf/histogram_perf.o 00:02:15.686 CC examples/util/zipf/zipf.o 00:02:15.686 CXX test/cpp_headers/nvmf_cmd.o 00:02:15.686 CXX test/cpp_headers/nvmf.o 00:02:15.686 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:15.686 CXX test/cpp_headers/nvmf_spec.o 00:02:15.686 CXX test/cpp_headers/opal_spec.o 00:02:15.686 CXX test/cpp_headers/opal.o 00:02:15.686 CXX test/cpp_headers/nvmf_transport.o 00:02:15.686 CXX test/cpp_headers/pci_ids.o 00:02:15.686 CXX test/cpp_headers/pipe.o 00:02:15.686 CC test/env/pci/pci_ut.o 00:02:15.686 CXX test/cpp_headers/scheduler.o 00:02:15.686 CXX test/cpp_headers/queue.o 00:02:15.686 CC examples/ioat/verify/verify.o 00:02:15.686 CXX test/cpp_headers/reduce.o 00:02:15.686 CXX test/cpp_headers/scsi.o 00:02:15.686 CXX test/cpp_headers/rpc.o 00:02:15.686 CC test/app/jsoncat/jsoncat.o 00:02:15.686 CXX test/cpp_headers/scsi_spec.o 
00:02:15.686 LINK spdk_lspci 00:02:15.686 CXX test/cpp_headers/stdinc.o 00:02:15.686 CXX test/cpp_headers/string.o 00:02:15.686 CXX test/cpp_headers/sock.o 00:02:15.686 CXX test/cpp_headers/trace.o 00:02:15.686 CXX test/cpp_headers/thread.o 00:02:15.686 CXX test/cpp_headers/trace_parser.o 00:02:15.686 CC test/env/memory/memory_ut.o 00:02:15.686 CXX test/cpp_headers/util.o 00:02:15.686 CXX test/cpp_headers/ublk.o 00:02:15.686 CXX test/cpp_headers/tree.o 00:02:15.686 CXX test/cpp_headers/version.o 00:02:15.686 CXX test/cpp_headers/uuid.o 00:02:15.686 CXX test/cpp_headers/vfio_user_pci.o 00:02:15.686 CXX test/cpp_headers/vfio_user_spec.o 00:02:15.686 CXX test/cpp_headers/vhost.o 00:02:15.686 CC test/thread/poller_perf/poller_perf.o 00:02:15.686 CXX test/cpp_headers/vmd.o 00:02:15.686 CXX test/cpp_headers/zipf.o 00:02:15.686 CXX test/cpp_headers/xor.o 00:02:15.686 CC test/app/stub/stub.o 00:02:15.686 CC examples/ioat/perf/perf.o 00:02:15.950 CC app/fio/nvme/fio_plugin.o 00:02:15.950 CC test/dma/test_dma/test_dma.o 00:02:15.950 CC test/app/bdev_svc/bdev_svc.o 00:02:15.950 CC app/fio/bdev/fio_plugin.o 00:02:15.950 LINK spdk_nvme_discover 00:02:15.950 LINK rpc_client_test 00:02:15.950 LINK nvmf_tgt 00:02:15.950 LINK interrupt_tgt 00:02:15.950 LINK spdk_trace_record 00:02:15.950 LINK iscsi_tgt 00:02:16.210 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:16.210 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:16.210 CC test/env/mem_callbacks/mem_callbacks.o 00:02:16.210 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:16.210 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:16.210 LINK spdk_tgt 00:02:16.210 LINK zipf 00:02:16.210 LINK spdk_dd 00:02:16.468 LINK vtophys 00:02:16.468 LINK poller_perf 00:02:16.468 LINK histogram_perf 00:02:16.468 LINK env_dpdk_post_init 00:02:16.468 LINK jsoncat 00:02:16.468 LINK bdev_svc 00:02:16.468 LINK verify 00:02:16.468 LINK stub 00:02:16.468 LINK ioat_perf 00:02:16.468 LINK spdk_trace 00:02:16.727 LINK spdk_nvme_identify 00:02:16.727 LINK 
nvme_fuzz 00:02:16.727 CC examples/sock/hello_world/hello_sock.o 00:02:16.727 CC examples/idxd/perf/perf.o 00:02:16.727 LINK pci_ut 00:02:16.727 CC examples/vmd/lsvmd/lsvmd.o 00:02:16.727 CC examples/vmd/led/led.o 00:02:16.727 LINK test_dma 00:02:16.727 CC test/event/event_perf/event_perf.o 00:02:16.727 LINK vhost_fuzz 00:02:16.727 CC examples/thread/thread/thread_ex.o 00:02:16.727 CC test/event/reactor/reactor.o 00:02:16.727 CC test/event/reactor_perf/reactor_perf.o 00:02:16.727 CC test/event/app_repeat/app_repeat.o 00:02:16.727 LINK spdk_nvme 00:02:16.988 LINK spdk_bdev 00:02:16.988 CC test/event/scheduler/scheduler.o 00:02:16.988 LINK spdk_nvme_perf 00:02:16.988 CC app/vhost/vhost.o 00:02:16.988 LINK lsvmd 00:02:16.988 LINK mem_callbacks 00:02:16.988 LINK led 00:02:16.988 LINK spdk_top 00:02:16.988 LINK reactor_perf 00:02:16.988 LINK hello_sock 00:02:16.988 LINK event_perf 00:02:16.988 LINK reactor 00:02:16.988 LINK app_repeat 00:02:16.988 LINK thread 00:02:16.988 LINK idxd_perf 00:02:17.249 LINK vhost 00:02:17.249 LINK scheduler 00:02:17.249 LINK memory_ut 00:02:17.249 CC test/nvme/aer/aer.o 00:02:17.249 CC test/nvme/err_injection/err_injection.o 00:02:17.249 CC test/nvme/reset/reset.o 00:02:17.249 CC test/nvme/overhead/overhead.o 00:02:17.249 CC test/nvme/sgl/sgl.o 00:02:17.249 CC test/nvme/reserve/reserve.o 00:02:17.249 CC test/nvme/e2edp/nvme_dp.o 00:02:17.249 CC test/nvme/simple_copy/simple_copy.o 00:02:17.249 CC test/nvme/cuse/cuse.o 00:02:17.249 CC test/nvme/startup/startup.o 00:02:17.249 CC test/nvme/boot_partition/boot_partition.o 00:02:17.249 CC test/nvme/fdp/fdp.o 00:02:17.249 CC test/nvme/connect_stress/connect_stress.o 00:02:17.249 CC test/nvme/compliance/nvme_compliance.o 00:02:17.249 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:17.249 CC test/nvme/fused_ordering/fused_ordering.o 00:02:17.249 CC test/accel/dif/dif.o 00:02:17.249 CC test/blobfs/mkfs/mkfs.o 00:02:17.510 CC test/lvol/esnap/esnap.o 00:02:17.510 CC examples/nvme/cmb_copy/cmb_copy.o 
00:02:17.510 CC examples/nvme/hello_world/hello_world.o 00:02:17.510 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:17.510 LINK boot_partition 00:02:17.510 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:17.510 CC examples/nvme/hotplug/hotplug.o 00:02:17.510 CC examples/nvme/arbitration/arbitration.o 00:02:17.510 CC examples/nvme/abort/abort.o 00:02:17.510 LINK startup 00:02:17.510 CC examples/nvme/reconnect/reconnect.o 00:02:17.510 LINK err_injection 00:02:17.510 LINK connect_stress 00:02:17.510 LINK doorbell_aers 00:02:17.510 LINK fused_ordering 00:02:17.510 LINK reserve 00:02:17.510 LINK simple_copy 00:02:17.510 LINK mkfs 00:02:17.510 LINK reset 00:02:17.510 LINK sgl 00:02:17.510 CC examples/accel/perf/accel_perf.o 00:02:17.510 LINK aer 00:02:17.510 LINK nvme_dp 00:02:17.771 LINK overhead 00:02:17.771 LINK nvme_compliance 00:02:17.771 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:17.771 CC examples/blob/hello_world/hello_blob.o 00:02:17.771 CC examples/blob/cli/blobcli.o 00:02:17.771 LINK fdp 00:02:17.771 LINK cmb_copy 00:02:17.771 LINK pmr_persistence 00:02:17.771 LINK hello_world 00:02:17.771 LINK hotplug 00:02:17.771 LINK iscsi_fuzz 00:02:17.771 LINK arbitration 00:02:18.033 LINK reconnect 00:02:18.033 LINK abort 00:02:18.033 LINK hello_blob 00:02:18.033 LINK dif 00:02:18.033 LINK hello_fsdev 00:02:18.033 LINK nvme_manage 00:02:18.033 LINK accel_perf 00:02:18.294 LINK blobcli 00:02:18.556 LINK cuse 00:02:18.556 CC test/bdev/bdevio/bdevio.o 00:02:18.816 CC examples/bdev/hello_world/hello_bdev.o 00:02:18.816 CC examples/bdev/bdevperf/bdevperf.o 00:02:19.077 LINK hello_bdev 00:02:19.077 LINK bdevio 00:02:19.648 LINK bdevperf 00:02:20.219 CC examples/nvmf/nvmf/nvmf.o 00:02:20.481 LINK nvmf 00:02:22.395 LINK esnap 00:02:22.395 00:02:22.395 real 0m54.779s 00:02:22.395 user 7m49.520s 00:02:22.395 sys 4m29.006s 00:02:22.395 16:14:08 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:22.395 16:14:08 make -- common/autotest_common.sh@10 -- 
$ set +x 00:02:22.395 ************************************ 00:02:22.395 END TEST make 00:02:22.395 ************************************ 00:02:22.395 16:14:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:22.395 16:14:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:22.395 16:14:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:22.395 16:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.395 16:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:22.395 16:14:08 -- pm/common@44 -- $ pid=1876528 00:02:22.395 16:14:08 -- pm/common@50 -- $ kill -TERM 1876528 00:02:22.395 16:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.395 16:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:22.395 16:14:08 -- pm/common@44 -- $ pid=1876529 00:02:22.395 16:14:08 -- pm/common@50 -- $ kill -TERM 1876529 00:02:22.395 16:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.395 16:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:22.395 16:14:08 -- pm/common@44 -- $ pid=1876531 00:02:22.395 16:14:08 -- pm/common@50 -- $ kill -TERM 1876531 00:02:22.395 16:14:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.395 16:14:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:22.395 16:14:08 -- pm/common@44 -- $ pid=1876555 00:02:22.395 16:14:08 -- pm/common@50 -- $ sudo -E kill -TERM 1876555 00:02:22.395 16:14:08 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:22.396 16:14:08 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:22.657 16:14:08 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:22.657 16:14:08 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:22.657 16:14:08 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:22.657 16:14:08 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:22.657 16:14:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:22.657 16:14:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:22.657 16:14:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:22.657 16:14:08 -- scripts/common.sh@336 -- # IFS=.-: 00:02:22.657 16:14:08 -- scripts/common.sh@336 -- # read -ra ver1 00:02:22.657 16:14:08 -- scripts/common.sh@337 -- # IFS=.-: 00:02:22.657 16:14:08 -- scripts/common.sh@337 -- # read -ra ver2 00:02:22.657 16:14:08 -- scripts/common.sh@338 -- # local 'op=<' 00:02:22.657 16:14:08 -- scripts/common.sh@340 -- # ver1_l=2 00:02:22.657 16:14:08 -- scripts/common.sh@341 -- # ver2_l=1 00:02:22.657 16:14:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:22.657 16:14:08 -- scripts/common.sh@344 -- # case "$op" in 00:02:22.657 16:14:08 -- scripts/common.sh@345 -- # : 1 00:02:22.657 16:14:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:22.657 16:14:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:22.657 16:14:08 -- scripts/common.sh@365 -- # decimal 1 00:02:22.657 16:14:08 -- scripts/common.sh@353 -- # local d=1 00:02:22.657 16:14:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:22.657 16:14:08 -- scripts/common.sh@355 -- # echo 1 00:02:22.657 16:14:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:22.657 16:14:08 -- scripts/common.sh@366 -- # decimal 2 00:02:22.657 16:14:08 -- scripts/common.sh@353 -- # local d=2 00:02:22.657 16:14:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:22.657 16:14:08 -- scripts/common.sh@355 -- # echo 2 00:02:22.657 16:14:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:22.657 16:14:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:22.657 16:14:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:22.657 16:14:08 -- scripts/common.sh@368 -- # return 0 00:02:22.657 16:14:08 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:22.657 16:14:08 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:22.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:22.657 --rc genhtml_branch_coverage=1 00:02:22.657 --rc genhtml_function_coverage=1 00:02:22.657 --rc genhtml_legend=1 00:02:22.657 --rc geninfo_all_blocks=1 00:02:22.657 --rc geninfo_unexecuted_blocks=1 00:02:22.657 00:02:22.657 ' 00:02:22.657 16:14:08 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:22.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:22.657 --rc genhtml_branch_coverage=1 00:02:22.657 --rc genhtml_function_coverage=1 00:02:22.657 --rc genhtml_legend=1 00:02:22.657 --rc geninfo_all_blocks=1 00:02:22.657 --rc geninfo_unexecuted_blocks=1 00:02:22.657 00:02:22.657 ' 00:02:22.657 16:14:08 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:22.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:22.657 --rc genhtml_branch_coverage=1 00:02:22.657 --rc 
genhtml_function_coverage=1 00:02:22.657 --rc genhtml_legend=1 00:02:22.657 --rc geninfo_all_blocks=1 00:02:22.657 --rc geninfo_unexecuted_blocks=1 00:02:22.657 00:02:22.657 ' 00:02:22.657 16:14:08 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:22.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:22.657 --rc genhtml_branch_coverage=1 00:02:22.657 --rc genhtml_function_coverage=1 00:02:22.657 --rc genhtml_legend=1 00:02:22.657 --rc geninfo_all_blocks=1 00:02:22.657 --rc geninfo_unexecuted_blocks=1 00:02:22.657 00:02:22.657 ' 00:02:22.657 16:14:08 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:22.657 16:14:08 -- nvmf/common.sh@7 -- # uname -s 00:02:22.657 16:14:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:22.657 16:14:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:22.657 16:14:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:22.657 16:14:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:22.657 16:14:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:22.657 16:14:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:22.657 16:14:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:22.657 16:14:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:22.657 16:14:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:22.657 16:14:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:22.657 16:14:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:22.657 16:14:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:02:22.657 16:14:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:22.657 16:14:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:22.657 16:14:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:22.657 16:14:08 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:22.657 16:14:08 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:22.657 16:14:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:22.657 16:14:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:22.657 16:14:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:22.657 16:14:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:22.657 16:14:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.657 16:14:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.657 16:14:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.657 16:14:08 -- paths/export.sh@5 -- # export PATH 00:02:22.657 16:14:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.657 16:14:08 -- nvmf/common.sh@51 -- # : 0 00:02:22.657 16:14:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:22.657 16:14:08 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:22.657 16:14:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:22.657 16:14:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:22.657 16:14:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:22.657 16:14:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:22.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:22.657 16:14:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:22.657 16:14:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:22.657 16:14:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:22.657 16:14:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:22.657 16:14:08 -- spdk/autotest.sh@32 -- # uname -s 00:02:22.657 16:14:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:22.657 16:14:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:22.657 16:14:08 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:22.657 16:14:08 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:22.657 16:14:08 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:22.657 16:14:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:22.657 16:14:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:22.657 16:14:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:22.657 16:14:08 -- spdk/autotest.sh@48 -- # udevadm_pid=1941727 00:02:22.657 16:14:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:22.657 16:14:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:22.657 16:14:08 -- pm/common@17 -- # local monitor 00:02:22.657 16:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.657 16:14:08 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:22.657 16:14:08 -- pm/common@21 -- # date +%s 00:02:22.657 16:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.658 16:14:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.658 16:14:08 -- pm/common@21 -- # date +%s 00:02:22.658 16:14:08 -- pm/common@25 -- # sleep 1 00:02:22.658 16:14:08 -- pm/common@21 -- # date +%s 00:02:22.658 16:14:08 -- pm/common@21 -- # date +%s 00:02:22.658 16:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732115648 00:02:22.658 16:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732115648 00:02:22.658 16:14:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732115648 00:02:22.658 16:14:08 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732115648 00:02:22.658 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732115648_collect-cpu-load.pm.log 00:02:22.658 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732115648_collect-vmstat.pm.log 00:02:22.658 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732115648_collect-cpu-temp.pm.log 00:02:22.658 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732115648_collect-bmc-pm.bmc.pm.log 00:02:23.600 
16:14:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:23.600 16:14:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:23.600 16:14:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:23.600 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:02:23.601 16:14:09 -- spdk/autotest.sh@59 -- # create_test_list 00:02:23.601 16:14:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:23.601 16:14:09 -- common/autotest_common.sh@10 -- # set +x 00:02:23.861 16:14:09 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:23.862 16:14:09 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.862 16:14:09 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.862 16:14:09 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:23.862 16:14:09 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.862 16:14:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:23.862 16:14:09 -- common/autotest_common.sh@1457 -- # uname 00:02:23.862 16:14:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:23.862 16:14:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:23.862 16:14:09 -- common/autotest_common.sh@1477 -- # uname 00:02:23.862 16:14:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:23.862 16:14:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:23.862 16:14:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:23.862 lcov: LCOV version 1.15 00:02:23.862 16:14:09 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:38.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:38.773 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:56.918 16:14:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:56.918 16:14:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:56.918 16:14:39 -- common/autotest_common.sh@10 -- # set +x 00:02:56.918 16:14:39 -- spdk/autotest.sh@78 -- # rm -f 00:02:56.918 16:14:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.488 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:57.488 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:57.488 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:57.488 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:57.488 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:57.489 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:57.489 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:57.489 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:57.489 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:57.748 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:57.748 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:57.748 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:57.748 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:57.748 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:57.748 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:57.748 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:57.748 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:58.008 16:14:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:58.008 16:14:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:58.008 16:14:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:58.008 16:14:43 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:58.008 16:14:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:58.008 16:14:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:58.008 16:14:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:58.008 16:14:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:58.008 16:14:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:58.008 16:14:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:58.008 16:14:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:58.008 16:14:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:58.008 16:14:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:58.008 16:14:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:58.008 16:14:43 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:58.008 No valid GPT data, bailing 00:02:58.008 16:14:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:58.008 16:14:43 -- scripts/common.sh@394 -- # pt= 00:02:58.008 16:14:43 -- scripts/common.sh@395 -- # return 1 00:02:58.008 16:14:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:58.269 1+0 records in 00:02:58.269 1+0 records out 00:02:58.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377234 s, 278 MB/s 00:02:58.269 16:14:43 -- spdk/autotest.sh@105 -- # sync 00:02:58.269 16:14:43 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:58.269 16:14:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:58.269 16:14:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:06.413 16:14:52 -- spdk/autotest.sh@111 -- # uname -s
00:03:06.413 16:14:52 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:06.413 16:14:52 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:06.413 16:14:52 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:09.813 Hugepages
00:03:09.813 node hugesize free / total
00:03:09.813 node0 1048576kB 0 / 0
00:03:09.813 node0 2048kB 0 / 0
00:03:09.813 node1 1048576kB 0 / 0
00:03:09.813 node1 2048kB 0 / 0
00:03:09.813
00:03:09.813 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:09.813 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:09.813 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:09.813 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:09.813 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:09.813 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:09.813 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:09.813 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:09.813 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:09.813 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:09.813 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:09.813 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:09.813 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:09.813 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:09.813 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:09.813 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:09.813 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:09.813 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:03:09.813 16:14:55 -- spdk/autotest.sh@117 -- # uname -s
00:03:09.813 16:14:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:09.813 16:14:55 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:09.813 16:14:55 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.111 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:13.111 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:15.022 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:15.283 16:15:01 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:16.224 16:15:02 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:16.224 16:15:02 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:16.224 16:15:02 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:16.224 16:15:02 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:16.224 16:15:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:16.224 16:15:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:16.224 16:15:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:16.224 16:15:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:16.224 16:15:02 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:03:16.484 16:15:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:16.484 16:15:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:16.484 16:15:02 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.782 Waiting for block devices as requested 00:03:19.782 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:19.782 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:20.043 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:20.043 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:20.043 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:20.304 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:20.304 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:20.304 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:20.304 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:20.566 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:20.566 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:20.827 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:20.827 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:20.828 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:21.089 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:21.089 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:21.089 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:21.350 16:15:07 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:21.350 16:15:07 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:21.350 16:15:07 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:21.350 16:15:07 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:21.350 16:15:07 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:21.350 16:15:07 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:21.350 16:15:07 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:21.350 16:15:07 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:21.350 16:15:07 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:21.350 16:15:07 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:21.350 16:15:07 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:21.350 16:15:07 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:21.350 16:15:07 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:21.350 16:15:07 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:21.350 16:15:07 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:21.350 16:15:07 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:21.350 16:15:07 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:21.350 16:15:07 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:21.350 16:15:07 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:21.350 16:15:07 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:21.350 16:15:07 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:21.350 16:15:07 -- common/autotest_common.sh@1543 -- # continue 00:03:21.350 16:15:07 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:21.350 16:15:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:21.350 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:03:21.610 16:15:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:21.610 16:15:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:21.610 16:15:07 -- common/autotest_common.sh@10 -- # set +x 00:03:21.610 16:15:07 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.925 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:03:24.925 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:24.925 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:25.185 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:25.185 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:25.185 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:25.185 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:25.185 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:25.185 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:25.446 16:15:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:25.446 16:15:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:25.446 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:03:25.446 16:15:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:25.446 16:15:11 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:25.446 16:15:11 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:25.446 16:15:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:25.446 16:15:11 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:25.446 16:15:11 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:25.446 16:15:11 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:25.446 16:15:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:25.446 16:15:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:25.446 16:15:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:25.446 16:15:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:25.446 16:15:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:25.446 16:15:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:25.707 16:15:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:25.707 16:15:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:25.707 16:15:11 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:25.707 16:15:11 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:25.707 16:15:11 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:25.707 16:15:11 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:25.707 16:15:11 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:25.707 16:15:11 -- common/autotest_common.sh@1572 -- # return 0 00:03:25.707 16:15:11 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:25.707 16:15:11 -- common/autotest_common.sh@1580 -- # return 0 00:03:25.707 16:15:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:25.707 16:15:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:25.707 16:15:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:25.707 16:15:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:25.707 16:15:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:25.707 16:15:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.707 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:03:25.707 16:15:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:25.707 16:15:11 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:25.707 16:15:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.707 16:15:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.707 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:03:25.707 ************************************ 
00:03:25.707 START TEST env 00:03:25.707 ************************************ 00:03:25.708 16:15:11 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:25.708 * Looking for test storage... 00:03:25.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:25.708 16:15:11 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:25.708 16:15:11 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:25.708 16:15:11 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:25.969 16:15:11 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:25.969 16:15:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.969 16:15:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.969 16:15:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.970 16:15:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.970 16:15:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.970 16:15:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.970 16:15:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.970 16:15:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.970 16:15:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.970 16:15:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.970 16:15:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.970 16:15:11 env -- scripts/common.sh@344 -- # case "$op" in 00:03:25.970 16:15:11 env -- scripts/common.sh@345 -- # : 1 00:03:25.970 16:15:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.970 16:15:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.970 16:15:11 env -- scripts/common.sh@365 -- # decimal 1 00:03:25.970 16:15:11 env -- scripts/common.sh@353 -- # local d=1 00:03:25.970 16:15:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.970 16:15:11 env -- scripts/common.sh@355 -- # echo 1 00:03:25.970 16:15:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.970 16:15:11 env -- scripts/common.sh@366 -- # decimal 2 00:03:25.970 16:15:11 env -- scripts/common.sh@353 -- # local d=2 00:03:25.970 16:15:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.970 16:15:11 env -- scripts/common.sh@355 -- # echo 2 00:03:25.970 16:15:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.970 16:15:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.970 16:15:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.970 16:15:11 env -- scripts/common.sh@368 -- # return 0 00:03:25.970 16:15:11 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.970 16:15:11 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:25.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.970 --rc genhtml_branch_coverage=1 00:03:25.970 --rc genhtml_function_coverage=1 00:03:25.970 --rc genhtml_legend=1 00:03:25.970 --rc geninfo_all_blocks=1 00:03:25.970 --rc geninfo_unexecuted_blocks=1 00:03:25.970 00:03:25.970 ' 00:03:25.970 16:15:11 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:25.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.970 --rc genhtml_branch_coverage=1 00:03:25.970 --rc genhtml_function_coverage=1 00:03:25.970 --rc genhtml_legend=1 00:03:25.970 --rc geninfo_all_blocks=1 00:03:25.970 --rc geninfo_unexecuted_blocks=1 00:03:25.970 00:03:25.970 ' 00:03:25.970 16:15:11 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:25.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:25.970 --rc genhtml_branch_coverage=1 00:03:25.970 --rc genhtml_function_coverage=1 00:03:25.970 --rc genhtml_legend=1 00:03:25.970 --rc geninfo_all_blocks=1 00:03:25.970 --rc geninfo_unexecuted_blocks=1 00:03:25.970 00:03:25.970 ' 00:03:25.970 16:15:11 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:25.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.970 --rc genhtml_branch_coverage=1 00:03:25.970 --rc genhtml_function_coverage=1 00:03:25.970 --rc genhtml_legend=1 00:03:25.970 --rc geninfo_all_blocks=1 00:03:25.970 --rc geninfo_unexecuted_blocks=1 00:03:25.970 00:03:25.970 ' 00:03:25.970 16:15:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:25.970 16:15:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.970 16:15:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.970 16:15:11 env -- common/autotest_common.sh@10 -- # set +x 00:03:25.970 ************************************ 00:03:25.970 START TEST env_memory 00:03:25.970 ************************************ 00:03:25.970 16:15:11 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:25.970 00:03:25.970 00:03:25.970 CUnit - A unit testing framework for C - Version 2.1-3 00:03:25.970 http://cunit.sourceforge.net/ 00:03:25.970 00:03:25.970 00:03:25.970 Suite: memory 00:03:25.970 Test: alloc and free memory map ...[2024-11-20 16:15:11.812476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:25.970 passed 00:03:25.970 Test: mem map translation ...[2024-11-20 16:15:11.838148] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:25.970 [2024-11-20 
16:15:11.838176] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:25.970 [2024-11-20 16:15:11.838222] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:25.970 [2024-11-20 16:15:11.838230] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:25.970 passed
00:03:25.970 Test: mem map registration ...[2024-11-20 16:15:11.893561] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:25.970 [2024-11-20 16:15:11.893584] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:25.970 passed
00:03:26.232 Test: mem map adjacent registrations ...passed
00:03:26.232
00:03:26.232 Run Summary: Type Total Ran Passed Failed Inactive
00:03:26.232 suites 1 1 n/a 0 0
00:03:26.232 tests 4 4 4 0 0
00:03:26.232 asserts 152 152 152 0 n/a
00:03:26.232
00:03:26.232 Elapsed time = 0.194 seconds
00:03:26.232
00:03:26.232 real 0m0.209s
00:03:26.232 user 0m0.197s
00:03:26.232 sys 0m0.011s
00:03:26.232 16:15:11 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:26.232 16:15:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:26.232 ************************************
00:03:26.232 END TEST env_memory
00:03:26.232 ************************************
00:03:26.232 16:15:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:26.232 16:15:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:26.232 16:15:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.232 16:15:12 env -- common/autotest_common.sh@10 -- # set +x 00:03:26.232 ************************************ 00:03:26.232 START TEST env_vtophys 00:03:26.232 ************************************ 00:03:26.232 16:15:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:26.232 EAL: lib.eal log level changed from notice to debug 00:03:26.232 EAL: Detected lcore 0 as core 0 on socket 0 00:03:26.232 EAL: Detected lcore 1 as core 1 on socket 0 00:03:26.232 EAL: Detected lcore 2 as core 2 on socket 0 00:03:26.232 EAL: Detected lcore 3 as core 3 on socket 0 00:03:26.232 EAL: Detected lcore 4 as core 4 on socket 0 00:03:26.232 EAL: Detected lcore 5 as core 5 on socket 0 00:03:26.232 EAL: Detected lcore 6 as core 6 on socket 0 00:03:26.232 EAL: Detected lcore 7 as core 7 on socket 0 00:03:26.232 EAL: Detected lcore 8 as core 8 on socket 0 00:03:26.232 EAL: Detected lcore 9 as core 9 on socket 0 00:03:26.232 EAL: Detected lcore 10 as core 10 on socket 0 00:03:26.232 EAL: Detected lcore 11 as core 11 on socket 0 00:03:26.232 EAL: Detected lcore 12 as core 12 on socket 0 00:03:26.232 EAL: Detected lcore 13 as core 13 on socket 0 00:03:26.232 EAL: Detected lcore 14 as core 14 on socket 0 00:03:26.232 EAL: Detected lcore 15 as core 15 on socket 0 00:03:26.232 EAL: Detected lcore 16 as core 16 on socket 0 00:03:26.232 EAL: Detected lcore 17 as core 17 on socket 0 00:03:26.232 EAL: Detected lcore 18 as core 18 on socket 0 00:03:26.232 EAL: Detected lcore 19 as core 19 on socket 0 00:03:26.232 EAL: Detected lcore 20 as core 20 on socket 0 00:03:26.232 EAL: Detected lcore 21 as core 21 on socket 0 00:03:26.232 EAL: Detected lcore 22 as core 22 on socket 0 00:03:26.232 EAL: Detected lcore 23 as core 23 on socket 0 00:03:26.232 EAL: Detected lcore 24 as core 24 on socket 0 00:03:26.232 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:26.232 EAL: Detected lcore 26 as core 26 on socket 0 00:03:26.232 EAL: Detected lcore 27 as core 27 on socket 0 00:03:26.232 EAL: Detected lcore 28 as core 28 on socket 0 00:03:26.232 EAL: Detected lcore 29 as core 29 on socket 0 00:03:26.232 EAL: Detected lcore 30 as core 30 on socket 0 00:03:26.232 EAL: Detected lcore 31 as core 31 on socket 0 00:03:26.232 EAL: Detected lcore 32 as core 32 on socket 0 00:03:26.232 EAL: Detected lcore 33 as core 33 on socket 0 00:03:26.232 EAL: Detected lcore 34 as core 34 on socket 0 00:03:26.232 EAL: Detected lcore 35 as core 35 on socket 0 00:03:26.232 EAL: Detected lcore 36 as core 0 on socket 1 00:03:26.232 EAL: Detected lcore 37 as core 1 on socket 1 00:03:26.232 EAL: Detected lcore 38 as core 2 on socket 1 00:03:26.232 EAL: Detected lcore 39 as core 3 on socket 1 00:03:26.232 EAL: Detected lcore 40 as core 4 on socket 1 00:03:26.232 EAL: Detected lcore 41 as core 5 on socket 1 00:03:26.232 EAL: Detected lcore 42 as core 6 on socket 1 00:03:26.232 EAL: Detected lcore 43 as core 7 on socket 1 00:03:26.232 EAL: Detected lcore 44 as core 8 on socket 1 00:03:26.232 EAL: Detected lcore 45 as core 9 on socket 1 00:03:26.232 EAL: Detected lcore 46 as core 10 on socket 1 00:03:26.232 EAL: Detected lcore 47 as core 11 on socket 1 00:03:26.232 EAL: Detected lcore 48 as core 12 on socket 1 00:03:26.232 EAL: Detected lcore 49 as core 13 on socket 1 00:03:26.232 EAL: Detected lcore 50 as core 14 on socket 1 00:03:26.232 EAL: Detected lcore 51 as core 15 on socket 1 00:03:26.232 EAL: Detected lcore 52 as core 16 on socket 1 00:03:26.232 EAL: Detected lcore 53 as core 17 on socket 1 00:03:26.232 EAL: Detected lcore 54 as core 18 on socket 1 00:03:26.232 EAL: Detected lcore 55 as core 19 on socket 1 00:03:26.232 EAL: Detected lcore 56 as core 20 on socket 1 00:03:26.232 EAL: Detected lcore 57 as core 21 on socket 1 00:03:26.232 EAL: Detected lcore 58 as core 22 on socket 1 00:03:26.232 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:26.232 EAL: Detected lcore 60 as core 24 on socket 1 00:03:26.232 EAL: Detected lcore 61 as core 25 on socket 1 00:03:26.232 EAL: Detected lcore 62 as core 26 on socket 1 00:03:26.232 EAL: Detected lcore 63 as core 27 on socket 1 00:03:26.232 EAL: Detected lcore 64 as core 28 on socket 1 00:03:26.232 EAL: Detected lcore 65 as core 29 on socket 1 00:03:26.232 EAL: Detected lcore 66 as core 30 on socket 1 00:03:26.232 EAL: Detected lcore 67 as core 31 on socket 1 00:03:26.232 EAL: Detected lcore 68 as core 32 on socket 1 00:03:26.232 EAL: Detected lcore 69 as core 33 on socket 1 00:03:26.232 EAL: Detected lcore 70 as core 34 on socket 1 00:03:26.232 EAL: Detected lcore 71 as core 35 on socket 1 00:03:26.232 EAL: Detected lcore 72 as core 0 on socket 0 00:03:26.232 EAL: Detected lcore 73 as core 1 on socket 0 00:03:26.232 EAL: Detected lcore 74 as core 2 on socket 0 00:03:26.232 EAL: Detected lcore 75 as core 3 on socket 0 00:03:26.232 EAL: Detected lcore 76 as core 4 on socket 0 00:03:26.232 EAL: Detected lcore 77 as core 5 on socket 0 00:03:26.232 EAL: Detected lcore 78 as core 6 on socket 0 00:03:26.232 EAL: Detected lcore 79 as core 7 on socket 0 00:03:26.232 EAL: Detected lcore 80 as core 8 on socket 0 00:03:26.233 EAL: Detected lcore 81 as core 9 on socket 0 00:03:26.233 EAL: Detected lcore 82 as core 10 on socket 0 00:03:26.233 EAL: Detected lcore 83 as core 11 on socket 0 00:03:26.233 EAL: Detected lcore 84 as core 12 on socket 0 00:03:26.233 EAL: Detected lcore 85 as core 13 on socket 0 00:03:26.233 EAL: Detected lcore 86 as core 14 on socket 0 00:03:26.233 EAL: Detected lcore 87 as core 15 on socket 0 00:03:26.233 EAL: Detected lcore 88 as core 16 on socket 0 00:03:26.233 EAL: Detected lcore 89 as core 17 on socket 0 00:03:26.233 EAL: Detected lcore 90 as core 18 on socket 0 00:03:26.233 EAL: Detected lcore 91 as core 19 on socket 0 00:03:26.233 EAL: Detected lcore 92 as core 20 on socket 0 00:03:26.233 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:26.233 EAL: Detected lcore 94 as core 22 on socket 0 00:03:26.233 EAL: Detected lcore 95 as core 23 on socket 0 00:03:26.233 EAL: Detected lcore 96 as core 24 on socket 0 00:03:26.233 EAL: Detected lcore 97 as core 25 on socket 0 00:03:26.233 EAL: Detected lcore 98 as core 26 on socket 0 00:03:26.233 EAL: Detected lcore 99 as core 27 on socket 0 00:03:26.233 EAL: Detected lcore 100 as core 28 on socket 0 00:03:26.233 EAL: Detected lcore 101 as core 29 on socket 0 00:03:26.233 EAL: Detected lcore 102 as core 30 on socket 0 00:03:26.233 EAL: Detected lcore 103 as core 31 on socket 0 00:03:26.233 EAL: Detected lcore 104 as core 32 on socket 0 00:03:26.233 EAL: Detected lcore 105 as core 33 on socket 0 00:03:26.233 EAL: Detected lcore 106 as core 34 on socket 0 00:03:26.233 EAL: Detected lcore 107 as core 35 on socket 0 00:03:26.233 EAL: Detected lcore 108 as core 0 on socket 1 00:03:26.233 EAL: Detected lcore 109 as core 1 on socket 1 00:03:26.233 EAL: Detected lcore 110 as core 2 on socket 1 00:03:26.233 EAL: Detected lcore 111 as core 3 on socket 1 00:03:26.233 EAL: Detected lcore 112 as core 4 on socket 1 00:03:26.233 EAL: Detected lcore 113 as core 5 on socket 1 00:03:26.233 EAL: Detected lcore 114 as core 6 on socket 1 00:03:26.233 EAL: Detected lcore 115 as core 7 on socket 1 00:03:26.233 EAL: Detected lcore 116 as core 8 on socket 1 00:03:26.233 EAL: Detected lcore 117 as core 9 on socket 1 00:03:26.233 EAL: Detected lcore 118 as core 10 on socket 1 00:03:26.233 EAL: Detected lcore 119 as core 11 on socket 1 00:03:26.233 EAL: Detected lcore 120 as core 12 on socket 1 00:03:26.233 EAL: Detected lcore 121 as core 13 on socket 1 00:03:26.233 EAL: Detected lcore 122 as core 14 on socket 1 00:03:26.233 EAL: Detected lcore 123 as core 15 on socket 1 00:03:26.233 EAL: Detected lcore 124 as core 16 on socket 1 00:03:26.233 EAL: Detected lcore 125 as core 17 on socket 1 00:03:26.233 EAL: Detected lcore 126 as core 18 on socket 1 00:03:26.233 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:26.233 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:26.233 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:26.233 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:26.233 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:26.233 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:26.233 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:26.233 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:26.233 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:26.233 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:26.233 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:26.233 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:26.233 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:26.233 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:26.233 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:26.233 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:26.233 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:26.233 EAL: Maximum logical cores by configuration: 128 00:03:26.233 EAL: Detected CPU lcores: 128 00:03:26.233 EAL: Detected NUMA nodes: 2 00:03:26.233 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:26.233 EAL: Detected shared linkage of DPDK 00:03:26.233 EAL: No shared files mode enabled, IPC will be disabled 00:03:26.233 EAL: Bus pci wants IOVA as 'DC' 00:03:26.233 EAL: Buses did not request a specific IOVA mode. 00:03:26.233 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:26.233 EAL: Selected IOVA mode 'VA' 00:03:26.233 EAL: Probing VFIO support... 00:03:26.233 EAL: IOMMU type 1 (Type 1) is supported 00:03:26.233 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:26.233 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:26.233 EAL: VFIO support initialized 00:03:26.233 EAL: Ask a virtual area of 0x2e000 bytes 00:03:26.233 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:26.233 EAL: Setting up physically contiguous memory... 
00:03:26.233 EAL: Setting maximum number of open files to 524288
00:03:26.233 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:26.233 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:26.233 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:26.233 EAL: Ask a virtual area of 0x61000 bytes
00:03:26.233 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:26.233 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:26.233 EAL: Ask a virtual area of 0x400000000 bytes
00:03:26.233 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:26.233 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:26.233 EAL: Ask a virtual area of 0x61000 bytes
00:03:26.233 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:26.233 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:26.233 EAL: Ask a virtual area of 0x400000000 bytes
00:03:26.233 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:26.233 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:26.233 EAL: Ask a virtual area of 0x61000 bytes
00:03:26.233 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:26.233 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:26.233 EAL: Ask a virtual area of 0x400000000 bytes
00:03:26.233 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:26.233 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:26.233 EAL: Ask a virtual area of 0x61000 bytes
00:03:26.233 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:26.233 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:26.233 EAL: Ask a virtual area of 0x400000000 bytes
00:03:26.233 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:26.233 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:26.233 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:26.233 EAL: Ask a virtual area of 0x61000 bytes
00:03:26.233 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:26.233 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:26.233 EAL: Ask a virtual area of 0x400000000 bytes
00:03:26.233 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:26.233 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:26.233 EAL: Ask a virtual area of 0x61000 bytes
00:03:26.233 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:26.233 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:26.233 EAL: Ask a virtual area of 0x400000000 bytes
00:03:26.233 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:26.233 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:26.233 EAL: Ask a virtual area of 0x61000 bytes
00:03:26.233 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:26.233 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:26.233 EAL: Ask a virtual area of 0x400000000 bytes
00:03:26.233 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:26.233 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:26.233 EAL: Ask a virtual area of 0x61000 bytes
00:03:26.233 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:26.233 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:26.233 EAL: Ask a virtual area of 0x400000000 bytes
00:03:26.233 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:26.233 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:26.233 EAL: Hugepages will be freed exactly as allocated.
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: TSC frequency is ~2400000 KHz
00:03:26.233 EAL: Main lcore 0 is ready (tid=7f4af7eeda00;cpuset=[0])
00:03:26.233 EAL: Trying to obtain current memory policy.
00:03:26.233 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.233 EAL: Restoring previous memory policy: 0
00:03:26.233 EAL: request: mp_malloc_sync
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: Heap on socket 0 was expanded by 2MB
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:26.233 EAL: Mem event callback 'spdk:(nil)' registered
00:03:26.233
00:03:26.233
00:03:26.233 CUnit - A unit testing framework for C - Version 2.1-3
00:03:26.233 http://cunit.sourceforge.net/
00:03:26.233
00:03:26.233
00:03:26.233 Suite: components_suite
00:03:26.233 Test: vtophys_malloc_test ...passed
00:03:26.233 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:26.233 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.233 EAL: Restoring previous memory policy: 4
00:03:26.233 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.233 EAL: request: mp_malloc_sync
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: Heap on socket 0 was expanded by 4MB
00:03:26.233 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.233 EAL: request: mp_malloc_sync
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: Heap on socket 0 was shrunk by 4MB
00:03:26.233 EAL: Trying to obtain current memory policy.
00:03:26.233 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.233 EAL: Restoring previous memory policy: 4
00:03:26.233 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.233 EAL: request: mp_malloc_sync
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: Heap on socket 0 was expanded by 6MB
00:03:26.233 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.233 EAL: request: mp_malloc_sync
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: Heap on socket 0 was shrunk by 6MB
00:03:26.233 EAL: Trying to obtain current memory policy.
00:03:26.233 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.233 EAL: Restoring previous memory policy: 4
00:03:26.233 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.233 EAL: request: mp_malloc_sync
00:03:26.233 EAL: No shared files mode enabled, IPC is disabled
00:03:26.233 EAL: Heap on socket 0 was expanded by 10MB
00:03:26.233 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.233 EAL: request: mp_malloc_sync
00:03:26.234 EAL: No shared files mode enabled, IPC is disabled
00:03:26.234 EAL: Heap on socket 0 was shrunk by 10MB
00:03:26.234 EAL: Trying to obtain current memory policy.
00:03:26.234 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.234 EAL: Restoring previous memory policy: 4
00:03:26.234 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.234 EAL: request: mp_malloc_sync
00:03:26.234 EAL: No shared files mode enabled, IPC is disabled
00:03:26.234 EAL: Heap on socket 0 was expanded by 18MB
00:03:26.234 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.234 EAL: request: mp_malloc_sync
00:03:26.234 EAL: No shared files mode enabled, IPC is disabled
00:03:26.234 EAL: Heap on socket 0 was shrunk by 18MB
00:03:26.234 EAL: Trying to obtain current memory policy.
00:03:26.234 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.234 EAL: Restoring previous memory policy: 4
00:03:26.234 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.234 EAL: request: mp_malloc_sync
00:03:26.234 EAL: No shared files mode enabled, IPC is disabled
00:03:26.234 EAL: Heap on socket 0 was expanded by 34MB
00:03:26.234 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.234 EAL: request: mp_malloc_sync
00:03:26.234 EAL: No shared files mode enabled, IPC is disabled
00:03:26.234 EAL: Heap on socket 0 was shrunk by 34MB
00:03:26.234 EAL: Trying to obtain current memory policy.
00:03:26.234 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.234 EAL: Restoring previous memory policy: 4
00:03:26.234 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.234 EAL: request: mp_malloc_sync
00:03:26.234 EAL: No shared files mode enabled, IPC is disabled
00:03:26.234 EAL: Heap on socket 0 was expanded by 66MB
00:03:26.234 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.234 EAL: request: mp_malloc_sync
00:03:26.234 EAL: No shared files mode enabled, IPC is disabled
00:03:26.234 EAL: Heap on socket 0 was shrunk by 66MB
00:03:26.234 EAL: Trying to obtain current memory policy.
00:03:26.234 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.494 EAL: Restoring previous memory policy: 4
00:03:26.494 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.494 EAL: request: mp_malloc_sync
00:03:26.494 EAL: No shared files mode enabled, IPC is disabled
00:03:26.494 EAL: Heap on socket 0 was expanded by 130MB
00:03:26.494 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.494 EAL: request: mp_malloc_sync
00:03:26.494 EAL: No shared files mode enabled, IPC is disabled
00:03:26.494 EAL: Heap on socket 0 was shrunk by 130MB
00:03:26.494 EAL: Trying to obtain current memory policy.
00:03:26.494 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.494 EAL: Restoring previous memory policy: 4
00:03:26.494 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.494 EAL: request: mp_malloc_sync
00:03:26.494 EAL: No shared files mode enabled, IPC is disabled
00:03:26.494 EAL: Heap on socket 0 was expanded by 258MB
00:03:26.494 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.494 EAL: request: mp_malloc_sync
00:03:26.494 EAL: No shared files mode enabled, IPC is disabled
00:03:26.494 EAL: Heap on socket 0 was shrunk by 258MB
00:03:26.494 EAL: Trying to obtain current memory policy.
00:03:26.494 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.494 EAL: Restoring previous memory policy: 4
00:03:26.494 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.494 EAL: request: mp_malloc_sync
00:03:26.494 EAL: No shared files mode enabled, IPC is disabled
00:03:26.494 EAL: Heap on socket 0 was expanded by 514MB
00:03:26.494 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.755 EAL: request: mp_malloc_sync
00:03:26.755 EAL: No shared files mode enabled, IPC is disabled
00:03:26.755 EAL: Heap on socket 0 was shrunk by 514MB
00:03:26.755 EAL: Trying to obtain current memory policy.
00:03:26.755 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:26.755 EAL: Restoring previous memory policy: 4
00:03:26.755 EAL: Calling mem event callback 'spdk:(nil)'
00:03:26.755 EAL: request: mp_malloc_sync
00:03:26.755 EAL: No shared files mode enabled, IPC is disabled
00:03:26.755 EAL: Heap on socket 0 was expanded by 1026MB
00:03:26.755 EAL: Calling mem event callback 'spdk:(nil)'
00:03:27.016 EAL: request: mp_malloc_sync
00:03:27.016 EAL: No shared files mode enabled, IPC is disabled
00:03:27.016 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:27.016 passed
00:03:27.016
00:03:27.016 Run Summary: Type Total Ran Passed Failed Inactive
00:03:27.016 suites 1 1 n/a 0 0
00:03:27.016 tests 2 2 2 0 0
00:03:27.016 asserts 497 497 497 0 n/a
00:03:27.016
00:03:27.016 Elapsed time = 0.647 seconds
00:03:27.016 EAL: Calling mem event callback 'spdk:(nil)'
00:03:27.016 EAL: request: mp_malloc_sync
00:03:27.016 EAL: No shared files mode enabled, IPC is disabled
00:03:27.016 EAL: Heap on socket 0 was shrunk by 2MB
00:03:27.016 EAL: No shared files mode enabled, IPC is disabled
00:03:27.017 EAL: No shared files mode enabled, IPC is disabled
00:03:27.017 EAL: No shared files mode enabled, IPC is disabled
00:03:27.017
00:03:27.017 real 0m0.773s
00:03:27.017 user 0m0.416s
00:03:27.017 sys 0m0.331s
00:03:27.017 16:15:12 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:27.017 16:15:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:27.017 ************************************
00:03:27.017 END TEST env_vtophys
00:03:27.017 ************************************
00:03:27.017 16:15:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:27.017 16:15:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:27.017 16:15:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:27.017 16:15:12 env -- common/autotest_common.sh@10 -- # set +x
00:03:27.017 ************************************
00:03:27.017 START TEST env_pci
00:03:27.017 ************************************
00:03:27.017 16:15:12 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:27.017
00:03:27.017
00:03:27.017 CUnit - A unit testing framework for C - Version 2.1-3
00:03:27.017 http://cunit.sourceforge.net/
00:03:27.017
00:03:27.017
00:03:27.017 Suite: pci
00:03:27.017 Test: pci_hook ...[2024-11-20 16:15:12.906242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1961276 has claimed it
00:03:27.017 EAL: Cannot find device (10000:00:01.0)
00:03:27.017 EAL: Failed to attach device on primary process
00:03:27.017 passed
00:03:27.017
00:03:27.017 Run Summary: Type Total Ran Passed Failed Inactive
00:03:27.017 suites 1 1 n/a 0 0
00:03:27.017 tests 1 1 1 0 0
00:03:27.017 asserts 25 25 25 0 n/a
00:03:27.017
00:03:27.017 Elapsed time = 0.030 seconds
00:03:27.017
00:03:27.017 real 0m0.049s
00:03:27.017 user 0m0.012s
00:03:27.017 sys 0m0.037s
00:03:27.017 16:15:12 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:27.017 16:15:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:27.017 ************************************
00:03:27.017 END TEST env_pci
00:03:27.017 ************************************
00:03:27.277 16:15:12 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:27.278 16:15:12 env -- env/env.sh@15 -- # uname
00:03:27.278 16:15:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:27.278 16:15:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:27.278 16:15:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:27.278 16:15:12 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:27.278 16:15:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:27.278 16:15:12 env -- common/autotest_common.sh@10 -- # set +x
00:03:27.278 ************************************
00:03:27.278 START TEST env_dpdk_post_init
00:03:27.278 ************************************
00:03:27.278 16:15:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:27.278 EAL: Detected CPU lcores: 128
00:03:27.278 EAL: Detected NUMA nodes: 2
00:03:27.278 EAL: Detected shared linkage of DPDK
00:03:27.278 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:27.278 EAL: Selected IOVA mode 'VA'
00:03:27.278 EAL: VFIO support initialized
00:03:27.278 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:27.278 EAL: Using IOMMU type 1 (Type 1)
00:03:27.538 EAL: Ignore mapping IO port bar(1)
00:03:27.538 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:03:27.798 EAL: Ignore mapping IO port bar(1)
00:03:27.798 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:03:27.798 EAL: Ignore mapping IO port bar(1)
00:03:28.058 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:03:28.058 EAL: Ignore mapping IO port bar(1)
00:03:28.319 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:03:28.319 EAL: Ignore mapping IO port bar(1)
00:03:28.319 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:03:28.579 EAL: Ignore mapping IO port bar(1)
00:03:28.579 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:03:28.840 EAL: Ignore mapping IO port bar(1)
00:03:28.840 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:03:29.100 EAL: Ignore mapping IO port bar(1)
00:03:29.100 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:03:29.361 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:03:29.361 EAL: Ignore mapping IO port bar(1)
00:03:29.621 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:03:29.621 EAL: Ignore mapping IO port bar(1)
00:03:29.881 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:03:29.881 EAL: Ignore mapping IO port bar(1)
00:03:30.142 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:03:30.142 EAL: Ignore mapping IO port bar(1)
00:03:30.142 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:03:30.403 EAL: Ignore mapping IO port bar(1)
00:03:30.403 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:03:30.662 EAL: Ignore mapping IO port bar(1)
00:03:30.663 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:03:30.924 EAL: Ignore mapping IO port bar(1)
00:03:30.924 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:03:30.924 EAL: Ignore mapping IO port bar(1)
00:03:31.184 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:03:31.184 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:03:31.184 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:03:31.184 Starting DPDK initialization...
00:03:31.184 Starting SPDK post initialization...
00:03:31.184 SPDK NVMe probe
00:03:31.184 Attaching to 0000:65:00.0
00:03:31.184 Attached to 0000:65:00.0
00:03:31.184 Cleaning up...
00:03:33.097
00:03:33.097 real 0m5.734s
00:03:33.097 user 0m0.089s
00:03:33.097 sys 0m0.192s
00:03:33.097 16:15:18 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:33.097 16:15:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:33.097 ************************************
00:03:33.097 END TEST env_dpdk_post_init
00:03:33.097 ************************************
00:03:33.097 16:15:18 env -- env/env.sh@26 -- # uname
00:03:33.097 16:15:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:33.097 16:15:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:33.097 16:15:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:33.097 16:15:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:33.097 16:15:18 env -- common/autotest_common.sh@10 -- # set +x
00:03:33.097 ************************************
00:03:33.097 START TEST env_mem_callbacks
00:03:33.097 ************************************
00:03:33.097 16:15:18 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:33.097 EAL: Detected CPU lcores: 128
00:03:33.097 EAL: Detected NUMA nodes: 2
00:03:33.097 EAL: Detected shared linkage of DPDK
00:03:33.097 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:33.097 EAL: Selected IOVA mode 'VA'
00:03:33.097 EAL: VFIO support initialized
00:03:33.097 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:33.097
00:03:33.097
00:03:33.097 CUnit - A unit testing framework for C - Version 2.1-3
00:03:33.097 http://cunit.sourceforge.net/
00:03:33.097
00:03:33.097
00:03:33.097 Suite: memory
00:03:33.097 Test: test ...
00:03:33.097 register 0x200000200000 2097152
00:03:33.097 malloc 3145728
00:03:33.098 register 0x200000400000 4194304
00:03:33.098 buf 0x200000500000 len 3145728 PASSED
00:03:33.098 malloc 64
00:03:33.098 buf 0x2000004fff40 len 64 PASSED
00:03:33.098 malloc 4194304
00:03:33.098 register 0x200000800000 6291456
00:03:33.098 buf 0x200000a00000 len 4194304 PASSED
00:03:33.098 free 0x200000500000 3145728
00:03:33.098 free 0x2000004fff40 64
00:03:33.098 unregister 0x200000400000 4194304 PASSED
00:03:33.098 free 0x200000a00000 4194304
00:03:33.098 unregister 0x200000800000 6291456 PASSED
00:03:33.098 malloc 8388608
00:03:33.098 register 0x200000400000 10485760
00:03:33.098 buf 0x200000600000 len 8388608 PASSED
00:03:33.098 free 0x200000600000 8388608
00:03:33.098 unregister 0x200000400000 10485760 PASSED
00:03:33.098 passed
00:03:33.098
00:03:33.098 Run Summary: Type Total Ran Passed Failed Inactive
00:03:33.098 suites 1 1 n/a 0 0
00:03:33.098 tests 1 1 1 0 0
00:03:33.098 asserts 15 15 15 0 n/a
00:03:33.098
00:03:33.098 Elapsed time = 0.007 seconds
00:03:33.098
00:03:33.098 real 0m0.063s
00:03:33.098 user 0m0.023s
00:03:33.098 sys 0m0.040s
00:03:33.098 16:15:18 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:33.098 16:15:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:33.098 ************************************
00:03:33.098 END TEST env_mem_callbacks
00:03:33.098 ************************************
00:03:33.098
00:03:33.098 real 0m7.424s
00:03:33.098 user 0m1.007s
00:03:33.098 sys 0m0.971s
00:03:33.098 16:15:18 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:33.098 16:15:18 env -- common/autotest_common.sh@10 -- # set +x
00:03:33.098 ************************************
00:03:33.098 END TEST env
00:03:33.098 ************************************
00:03:33.098 16:15:18 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:33.098 16:15:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:33.098 16:15:18 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:33.098 16:15:18 -- common/autotest_common.sh@10 -- # set +x
00:03:33.098 ************************************
00:03:33.098 START TEST rpc
00:03:33.098 ************************************
00:03:33.098 16:15:19 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:33.359 * Looking for test storage...
00:03:33.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:33.359 16:15:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:33.359 16:15:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:33.359 16:15:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:33.359 16:15:19 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:33.359 16:15:19 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:33.359 16:15:19 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:33.359 16:15:19 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:33.359 16:15:19 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:33.359 16:15:19 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:33.359 16:15:19 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:33.359 16:15:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:33.359 16:15:19 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:33.359 16:15:19 rpc -- scripts/common.sh@345 -- # : 1
00:03:33.359 16:15:19 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:33.359 16:15:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:33.359 16:15:19 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:33.359 16:15:19 rpc -- scripts/common.sh@353 -- # local d=1
00:03:33.359 16:15:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:33.359 16:15:19 rpc -- scripts/common.sh@355 -- # echo 1
00:03:33.359 16:15:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:33.359 16:15:19 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:33.359 16:15:19 rpc -- scripts/common.sh@353 -- # local d=2
00:03:33.359 16:15:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:33.359 16:15:19 rpc -- scripts/common.sh@355 -- # echo 2
00:03:33.359 16:15:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:33.359 16:15:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:33.359 16:15:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:33.359 16:15:19 rpc -- scripts/common.sh@368 -- # return 0
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:33.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:33.359 --rc genhtml_branch_coverage=1
00:03:33.359 --rc genhtml_function_coverage=1
00:03:33.359 --rc genhtml_legend=1
00:03:33.359 --rc geninfo_all_blocks=1
00:03:33.359 --rc geninfo_unexecuted_blocks=1
00:03:33.359
00:03:33.359 '
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:33.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:33.359 --rc genhtml_branch_coverage=1
00:03:33.359 --rc genhtml_function_coverage=1
00:03:33.359 --rc genhtml_legend=1
00:03:33.359 --rc geninfo_all_blocks=1
00:03:33.359 --rc geninfo_unexecuted_blocks=1
00:03:33.359
00:03:33.359 '
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:33.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:33.359 --rc genhtml_branch_coverage=1
00:03:33.359 --rc genhtml_function_coverage=1
00:03:33.359 --rc genhtml_legend=1
00:03:33.359 --rc geninfo_all_blocks=1
00:03:33.359 --rc geninfo_unexecuted_blocks=1
00:03:33.359
00:03:33.359 '
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:33.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:33.359 --rc genhtml_branch_coverage=1
00:03:33.359 --rc genhtml_function_coverage=1
00:03:33.359 --rc genhtml_legend=1
00:03:33.359 --rc geninfo_all_blocks=1
00:03:33.359 --rc geninfo_unexecuted_blocks=1
00:03:33.359
00:03:33.359 '
00:03:33.359 16:15:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:33.359 16:15:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1962736
00:03:33.359 16:15:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:33.359 16:15:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1962736
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@835 -- # '[' -z 1962736 ']'
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:33.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:33.359 16:15:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:33.359 [2024-11-20 16:15:19.253152] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:03:33.359 [2024-11-20 16:15:19.253207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1962736 ]
00:03:33.620 [2024-11-20 16:15:19.329020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:33.620 [2024-11-20 16:15:19.367110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:33.620 [2024-11-20 16:15:19.367144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1962736' to capture a snapshot of events at runtime.
00:03:33.620 [2024-11-20 16:15:19.367152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:33.620 [2024-11-20 16:15:19.367159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:33.620 [2024-11-20 16:15:19.367165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1962736 for offline analysis/debug.
00:03:33.620 [2024-11-20 16:15:19.367810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:33.620 16:15:19 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:33.620 16:15:19 rpc -- common/autotest_common.sh@868 -- # return 0
00:03:33.620 16:15:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:33.620 16:15:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:33.620 16:15:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:33.620 16:15:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:33.620 16:15:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:33.620 16:15:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:33.620 16:15:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:33.882 ************************************
00:03:33.882 START TEST rpc_integrity
00:03:33.882 ************************************
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:33.882 {
00:03:33.882 "name": "Malloc0",
00:03:33.882 "aliases": [
00:03:33.882 "3abb8900-d8d2-4065-81f7-e6f6a7147c2f"
00:03:33.882 ],
00:03:33.882 "product_name": "Malloc disk",
00:03:33.882 "block_size": 512,
00:03:33.882 "num_blocks": 16384,
00:03:33.882 "uuid": "3abb8900-d8d2-4065-81f7-e6f6a7147c2f",
00:03:33.882 "assigned_rate_limits": {
00:03:33.882 "rw_ios_per_sec": 0,
00:03:33.882 "rw_mbytes_per_sec": 0,
00:03:33.882 "r_mbytes_per_sec": 0,
00:03:33.882 "w_mbytes_per_sec": 0
00:03:33.882 },
00:03:33.882 "claimed": false,
00:03:33.882 "zoned": false,
00:03:33.882 "supported_io_types": {
00:03:33.882 "read": true,
00:03:33.882 "write": true,
00:03:33.882 "unmap": true,
00:03:33.882 "flush": true,
00:03:33.882 "reset": true,
00:03:33.882 "nvme_admin": false,
00:03:33.882 "nvme_io": false,
00:03:33.882 "nvme_io_md": false,
00:03:33.882 "write_zeroes": true,
00:03:33.882 "zcopy": true,
00:03:33.882 "get_zone_info": false,
00:03:33.882 "zone_management": false,
00:03:33.882 "zone_append": false,
00:03:33.882 "compare": false,
00:03:33.882 "compare_and_write": false,
00:03:33.882 "abort": true,
00:03:33.882 "seek_hole": false,
00:03:33.882 "seek_data": false,
00:03:33.882 "copy": true,
00:03:33.882 "nvme_iov_md": false
00:03:33.882 },
00:03:33.882 "memory_domains": [
00:03:33.882 {
00:03:33.882 "dma_device_id": "system",
00:03:33.882 "dma_device_type": 1
00:03:33.882 },
00:03:33.882 {
00:03:33.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:33.882 "dma_device_type": 2
00:03:33.882 }
00:03:33.882 ],
00:03:33.882 "driver_specific": {}
00:03:33.882 }
00:03:33.882 ]'
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:33.882 [2024-11-20 16:15:19.738910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:33.882 [2024-11-20 16:15:19.738942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:33.882 [2024-11-20 16:15:19.738955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x171c800
00:03:33.882 [2024-11-20 16:15:19.738963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:33.882 [2024-11-20 16:15:19.740317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:33.882 [2024-11-20 16:15:19.740339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:33.882 Passthru0
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:33.882 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:33.882 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:33.882 {
00:03:33.882 "name": "Malloc0",
00:03:33.882 "aliases": [
00:03:33.882 "3abb8900-d8d2-4065-81f7-e6f6a7147c2f"
00:03:33.882 ],
00:03:33.882 "product_name": "Malloc disk",
00:03:33.882 "block_size": 512,
00:03:33.882 "num_blocks": 16384,
00:03:33.882 "uuid": "3abb8900-d8d2-4065-81f7-e6f6a7147c2f",
00:03:33.882 "assigned_rate_limits": {
00:03:33.882 "rw_ios_per_sec": 0,
00:03:33.882 "rw_mbytes_per_sec": 0,
00:03:33.882 "r_mbytes_per_sec": 0,
00:03:33.882 "w_mbytes_per_sec": 0
00:03:33.882 },
00:03:33.882 "claimed": true,
00:03:33.882 "claim_type": "exclusive_write",
00:03:33.882 "zoned": false,
00:03:33.882 "supported_io_types": {
00:03:33.882 "read": true,
00:03:33.882 "write": true,
00:03:33.882 "unmap": true,
00:03:33.882 "flush": true,
00:03:33.882 "reset": true,
00:03:33.882 "nvme_admin": false,
00:03:33.882 "nvme_io": false,
00:03:33.882 "nvme_io_md": false,
00:03:33.882 "write_zeroes": true,
00:03:33.882 "zcopy": true,
00:03:33.882 "get_zone_info": false,
00:03:33.882 "zone_management": false,
00:03:33.882 "zone_append": false,
00:03:33.882 "compare": false,
00:03:33.882 "compare_and_write": false,
00:03:33.882 "abort": true,
00:03:33.882 "seek_hole": false,
00:03:33.882 "seek_data": false,
00:03:33.882 "copy": true,
00:03:33.882 "nvme_iov_md": false
00:03:33.882 },
00:03:33.882 "memory_domains": [
00:03:33.882 {
00:03:33.882 "dma_device_id": "system",
00:03:33.882 "dma_device_type": 1
00:03:33.882 },
00:03:33.882 {
00:03:33.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:33.882 "dma_device_type": 2
00:03:33.882 }
00:03:33.882 ],
00:03:33.882 "driver_specific": {}
00:03:33.882 },
00:03:33.882 {
00:03:33.882 "name": "Passthru0",
00:03:33.882 "aliases": [
00:03:33.882 "b522b884-28f8-515e-8630-6e04ffb2cae2"
00:03:33.882 ],
00:03:33.882 "product_name": "passthru",
00:03:33.882 "block_size": 512,
00:03:33.882 "num_blocks": 16384,
00:03:33.882 "uuid": "b522b884-28f8-515e-8630-6e04ffb2cae2",
00:03:33.882 "assigned_rate_limits": {
00:03:33.882 "rw_ios_per_sec": 0,
00:03:33.882 "rw_mbytes_per_sec": 0,
00:03:33.882 "r_mbytes_per_sec": 0,
00:03:33.882 "w_mbytes_per_sec": 0
00:03:33.882 },
00:03:33.882 "claimed": false,
00:03:33.882 "zoned": false,
00:03:33.882 "supported_io_types": {
00:03:33.882 "read": true,
00:03:33.882 "write": true,
00:03:33.882 "unmap": true,
00:03:33.882 "flush": true,
00:03:33.882 "reset": true,
00:03:33.882 "nvme_admin": false,
00:03:33.882 "nvme_io": false,
00:03:33.882 "nvme_io_md": false,
00:03:33.882 "write_zeroes": true,
00:03:33.882 "zcopy": true,
00:03:33.882 "get_zone_info": false,
00:03:33.882 "zone_management": false,
00:03:33.882 "zone_append": false,
00:03:33.882 "compare": false,
00:03:33.883 "compare_and_write": false,
00:03:33.883 "abort": true,
00:03:33.883 "seek_hole": false,
00:03:33.883 "seek_data": false,
00:03:33.883 "copy": true,
00:03:33.883 "nvme_iov_md": false
00:03:33.883 },
00:03:33.883 "memory_domains": [
00:03:33.883 {
00:03:33.883 "dma_device_id": "system",
00:03:33.883 "dma_device_type": 1
00:03:33.883 },
00:03:33.883 {
00:03:33.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:33.883 "dma_device_type": 2
00:03:33.883 }
00:03:33.883 ],
00:03:33.883 "driver_specific": {
00:03:33.883 "passthru": {
00:03:33.883 "name": "Passthru0",
00:03:33.883 "base_bdev_name": "Malloc0"
00:03:33.883 }
00:03:33.883 }
00:03:33.883 }
00:03:33.883 ]'
00:03:33.883 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:33.883 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:33.883 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:33.883 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:33.883 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:33.883 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:33.883 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:03:33.883 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:33.883 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:33.883 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:33.883 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:34.144 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:34.144 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:34.144 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:34.144 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:34.144 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:34.144 16:15:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:34.144
00:03:34.144 real 0m0.294s
00:03:34.144 user 0m0.197s
00:03:34.144 sys 0m0.035s
00:03:34.144 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:34.144 16:15:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:34.144 ************************************
00:03:34.144 END TEST rpc_integrity
00:03:34.144 ************************************
00:03:34.144 16:15:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:03:34.144 16:15:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:34.144 16:15:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:34.144 16:15:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:34.144 ************************************
00:03:34.144 START TEST rpc_plugins
00:03:34.144 ************************************ 00:03:34.144 16:15:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:34.144 16:15:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:34.144 16:15:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.144 16:15:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.144 16:15:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.144 16:15:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:34.144 16:15:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:34.145 16:15:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.145 16:15:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.145 16:15:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.145 16:15:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:34.145 { 00:03:34.145 "name": "Malloc1", 00:03:34.145 "aliases": [ 00:03:34.145 "caf58613-b835-4b8a-8ada-cde209d1c3ec" 00:03:34.145 ], 00:03:34.145 "product_name": "Malloc disk", 00:03:34.145 "block_size": 4096, 00:03:34.145 "num_blocks": 256, 00:03:34.145 "uuid": "caf58613-b835-4b8a-8ada-cde209d1c3ec", 00:03:34.145 "assigned_rate_limits": { 00:03:34.145 "rw_ios_per_sec": 0, 00:03:34.145 "rw_mbytes_per_sec": 0, 00:03:34.145 "r_mbytes_per_sec": 0, 00:03:34.145 "w_mbytes_per_sec": 0 00:03:34.145 }, 00:03:34.145 "claimed": false, 00:03:34.145 "zoned": false, 00:03:34.145 "supported_io_types": { 00:03:34.145 "read": true, 00:03:34.145 "write": true, 00:03:34.145 "unmap": true, 00:03:34.145 "flush": true, 00:03:34.145 "reset": true, 00:03:34.145 "nvme_admin": false, 00:03:34.145 "nvme_io": false, 00:03:34.145 "nvme_io_md": false, 00:03:34.145 "write_zeroes": true, 00:03:34.145 "zcopy": true, 00:03:34.145 "get_zone_info": false, 00:03:34.145 "zone_management": false, 00:03:34.145 
"zone_append": false, 00:03:34.145 "compare": false, 00:03:34.145 "compare_and_write": false, 00:03:34.145 "abort": true, 00:03:34.145 "seek_hole": false, 00:03:34.145 "seek_data": false, 00:03:34.145 "copy": true, 00:03:34.145 "nvme_iov_md": false 00:03:34.145 }, 00:03:34.145 "memory_domains": [ 00:03:34.145 { 00:03:34.145 "dma_device_id": "system", 00:03:34.145 "dma_device_type": 1 00:03:34.145 }, 00:03:34.145 { 00:03:34.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.145 "dma_device_type": 2 00:03:34.145 } 00:03:34.145 ], 00:03:34.145 "driver_specific": {} 00:03:34.145 } 00:03:34.145 ]' 00:03:34.145 16:15:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:34.145 16:15:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:34.145 16:15:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:34.145 16:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.145 16:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.145 16:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.145 16:15:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:34.145 16:15:20 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.145 16:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.145 16:15:20 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.145 16:15:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:34.145 16:15:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:34.405 16:15:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:34.405 00:03:34.405 real 0m0.151s 00:03:34.405 user 0m0.088s 00:03:34.405 sys 0m0.026s 00:03:34.405 16:15:20 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.405 16:15:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.405 ************************************ 
00:03:34.405 END TEST rpc_plugins 00:03:34.405 ************************************ 00:03:34.405 16:15:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:34.405 16:15:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.405 16:15:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.405 16:15:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.405 ************************************ 00:03:34.405 START TEST rpc_trace_cmd_test 00:03:34.405 ************************************ 00:03:34.405 16:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:34.405 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:34.405 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:34.405 16:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.405 16:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:34.406 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1962736", 00:03:34.406 "tpoint_group_mask": "0x8", 00:03:34.406 "iscsi_conn": { 00:03:34.406 "mask": "0x2", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "scsi": { 00:03:34.406 "mask": "0x4", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "bdev": { 00:03:34.406 "mask": "0x8", 00:03:34.406 "tpoint_mask": "0xffffffffffffffff" 00:03:34.406 }, 00:03:34.406 "nvmf_rdma": { 00:03:34.406 "mask": "0x10", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "nvmf_tcp": { 00:03:34.406 "mask": "0x20", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "ftl": { 00:03:34.406 "mask": "0x40", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "blobfs": { 00:03:34.406 "mask": "0x80", 00:03:34.406 
"tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "dsa": { 00:03:34.406 "mask": "0x200", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "thread": { 00:03:34.406 "mask": "0x400", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "nvme_pcie": { 00:03:34.406 "mask": "0x800", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "iaa": { 00:03:34.406 "mask": "0x1000", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "nvme_tcp": { 00:03:34.406 "mask": "0x2000", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "bdev_nvme": { 00:03:34.406 "mask": "0x4000", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "sock": { 00:03:34.406 "mask": "0x8000", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "blob": { 00:03:34.406 "mask": "0x10000", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "bdev_raid": { 00:03:34.406 "mask": "0x20000", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 }, 00:03:34.406 "scheduler": { 00:03:34.406 "mask": "0x40000", 00:03:34.406 "tpoint_mask": "0x0" 00:03:34.406 } 00:03:34.406 }' 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:34.406 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:34.670 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:34.670 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:34.670 16:15:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:34.670 00:03:34.670 real 0m0.251s 00:03:34.670 user 0m0.209s 00:03:34.670 sys 0m0.034s 00:03:34.670 16:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.670 16:15:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.670 ************************************ 00:03:34.670 END TEST rpc_trace_cmd_test 00:03:34.670 ************************************ 00:03:34.670 16:15:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:34.670 16:15:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:34.670 16:15:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:34.670 16:15:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:34.670 16:15:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:34.670 16:15:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.670 ************************************ 00:03:34.670 START TEST rpc_daemon_integrity 00:03:34.670 ************************************ 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.670 { 00:03:34.670 "name": "Malloc2", 00:03:34.670 "aliases": [ 00:03:34.670 "425ee383-00db-4909-a1df-8cfcc763557a" 00:03:34.670 ], 00:03:34.670 "product_name": "Malloc disk", 00:03:34.670 "block_size": 512, 00:03:34.670 "num_blocks": 16384, 00:03:34.670 "uuid": "425ee383-00db-4909-a1df-8cfcc763557a", 00:03:34.670 "assigned_rate_limits": { 00:03:34.670 "rw_ios_per_sec": 0, 00:03:34.670 "rw_mbytes_per_sec": 0, 00:03:34.670 "r_mbytes_per_sec": 0, 00:03:34.670 "w_mbytes_per_sec": 0 00:03:34.670 }, 00:03:34.670 "claimed": false, 00:03:34.670 "zoned": false, 00:03:34.670 "supported_io_types": { 00:03:34.670 "read": true, 00:03:34.670 "write": true, 00:03:34.670 "unmap": true, 00:03:34.670 "flush": true, 00:03:34.670 "reset": true, 00:03:34.670 "nvme_admin": false, 00:03:34.670 "nvme_io": false, 00:03:34.670 "nvme_io_md": false, 00:03:34.670 "write_zeroes": true, 00:03:34.670 "zcopy": true, 00:03:34.670 "get_zone_info": false, 00:03:34.670 "zone_management": false, 00:03:34.670 "zone_append": false, 00:03:34.670 "compare": false, 00:03:34.670 "compare_and_write": false, 00:03:34.670 "abort": true, 00:03:34.670 "seek_hole": false, 00:03:34.670 "seek_data": false, 00:03:34.670 "copy": true, 00:03:34.670 "nvme_iov_md": false 00:03:34.670 }, 00:03:34.670 "memory_domains": [ 00:03:34.670 { 
00:03:34.670 "dma_device_id": "system", 00:03:34.670 "dma_device_type": 1 00:03:34.670 }, 00:03:34.670 { 00:03:34.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.670 "dma_device_type": 2 00:03:34.670 } 00:03:34.670 ], 00:03:34.670 "driver_specific": {} 00:03:34.670 } 00:03:34.670 ]' 00:03:34.670 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.932 [2024-11-20 16:15:20.661410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:34.932 [2024-11-20 16:15:20.661443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.932 [2024-11-20 16:15:20.661455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15dab50 00:03:34.932 [2024-11-20 16:15:20.661463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.932 [2024-11-20 16:15:20.662798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.932 [2024-11-20 16:15:20.662818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.932 Passthru0 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.932 { 00:03:34.932 "name": "Malloc2", 00:03:34.932 "aliases": [ 00:03:34.932 "425ee383-00db-4909-a1df-8cfcc763557a" 00:03:34.932 ], 00:03:34.932 "product_name": "Malloc disk", 00:03:34.932 "block_size": 512, 00:03:34.932 "num_blocks": 16384, 00:03:34.932 "uuid": "425ee383-00db-4909-a1df-8cfcc763557a", 00:03:34.932 "assigned_rate_limits": { 00:03:34.932 "rw_ios_per_sec": 0, 00:03:34.932 "rw_mbytes_per_sec": 0, 00:03:34.932 "r_mbytes_per_sec": 0, 00:03:34.932 "w_mbytes_per_sec": 0 00:03:34.932 }, 00:03:34.932 "claimed": true, 00:03:34.932 "claim_type": "exclusive_write", 00:03:34.932 "zoned": false, 00:03:34.932 "supported_io_types": { 00:03:34.932 "read": true, 00:03:34.932 "write": true, 00:03:34.932 "unmap": true, 00:03:34.932 "flush": true, 00:03:34.932 "reset": true, 00:03:34.932 "nvme_admin": false, 00:03:34.932 "nvme_io": false, 00:03:34.932 "nvme_io_md": false, 00:03:34.932 "write_zeroes": true, 00:03:34.932 "zcopy": true, 00:03:34.932 "get_zone_info": false, 00:03:34.932 "zone_management": false, 00:03:34.932 "zone_append": false, 00:03:34.932 "compare": false, 00:03:34.932 "compare_and_write": false, 00:03:34.932 "abort": true, 00:03:34.932 "seek_hole": false, 00:03:34.932 "seek_data": false, 00:03:34.932 "copy": true, 00:03:34.932 "nvme_iov_md": false 00:03:34.932 }, 00:03:34.932 "memory_domains": [ 00:03:34.932 { 00:03:34.932 "dma_device_id": "system", 00:03:34.932 "dma_device_type": 1 00:03:34.932 }, 00:03:34.932 { 00:03:34.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.932 "dma_device_type": 2 00:03:34.932 } 00:03:34.932 ], 00:03:34.932 "driver_specific": {} 00:03:34.932 }, 00:03:34.932 { 00:03:34.932 "name": "Passthru0", 00:03:34.932 "aliases": [ 00:03:34.932 "4523f369-d228-538d-a4e7-84229f2e2ffe" 00:03:34.932 ], 00:03:34.932 "product_name": "passthru", 00:03:34.932 "block_size": 512, 00:03:34.932 "num_blocks": 16384, 00:03:34.932 "uuid": 
"4523f369-d228-538d-a4e7-84229f2e2ffe", 00:03:34.932 "assigned_rate_limits": { 00:03:34.932 "rw_ios_per_sec": 0, 00:03:34.932 "rw_mbytes_per_sec": 0, 00:03:34.932 "r_mbytes_per_sec": 0, 00:03:34.932 "w_mbytes_per_sec": 0 00:03:34.932 }, 00:03:34.932 "claimed": false, 00:03:34.932 "zoned": false, 00:03:34.932 "supported_io_types": { 00:03:34.932 "read": true, 00:03:34.932 "write": true, 00:03:34.932 "unmap": true, 00:03:34.932 "flush": true, 00:03:34.932 "reset": true, 00:03:34.932 "nvme_admin": false, 00:03:34.932 "nvme_io": false, 00:03:34.932 "nvme_io_md": false, 00:03:34.932 "write_zeroes": true, 00:03:34.932 "zcopy": true, 00:03:34.932 "get_zone_info": false, 00:03:34.932 "zone_management": false, 00:03:34.932 "zone_append": false, 00:03:34.932 "compare": false, 00:03:34.932 "compare_and_write": false, 00:03:34.932 "abort": true, 00:03:34.932 "seek_hole": false, 00:03:34.932 "seek_data": false, 00:03:34.932 "copy": true, 00:03:34.932 "nvme_iov_md": false 00:03:34.932 }, 00:03:34.932 "memory_domains": [ 00:03:34.932 { 00:03:34.932 "dma_device_id": "system", 00:03:34.932 "dma_device_type": 1 00:03:34.932 }, 00:03:34.932 { 00:03:34.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.932 "dma_device_type": 2 00:03:34.932 } 00:03:34.932 ], 00:03:34.932 "driver_specific": { 00:03:34.932 "passthru": { 00:03:34.932 "name": "Passthru0", 00:03:34.932 "base_bdev_name": "Malloc2" 00:03:34.932 } 00:03:34.932 } 00:03:34.932 } 00:03:34.932 ]' 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:34.932 16:15:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.932 00:03:34.932 real 0m0.299s 00:03:34.932 user 0m0.190s 00:03:34.932 sys 0m0.042s 00:03:34.933 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:34.933 16:15:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.933 ************************************ 00:03:34.933 END TEST rpc_daemon_integrity 00:03:34.933 ************************************ 00:03:34.933 16:15:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:34.933 16:15:20 rpc -- rpc/rpc.sh@84 -- # killprocess 1962736 00:03:34.933 16:15:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 1962736 ']' 00:03:34.933 16:15:20 rpc -- common/autotest_common.sh@958 -- # kill -0 1962736 00:03:34.933 16:15:20 rpc -- common/autotest_common.sh@959 -- # uname 00:03:34.933 16:15:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:34.933 16:15:20 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962736 00:03:35.193 16:15:20 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.193 16:15:20 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.193 16:15:20 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1962736' 00:03:35.193 killing process with pid 1962736 00:03:35.193 16:15:20 rpc -- common/autotest_common.sh@973 -- # kill 1962736 00:03:35.193 16:15:20 rpc -- common/autotest_common.sh@978 -- # wait 1962736 00:03:35.193 00:03:35.193 real 0m2.113s 00:03:35.193 user 0m2.798s 00:03:35.193 sys 0m0.695s 00:03:35.193 16:15:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.193 16:15:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.193 ************************************ 00:03:35.193 END TEST rpc 00:03:35.193 ************************************ 00:03:35.453 16:15:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.453 16:15:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.453 16:15:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.453 16:15:21 -- common/autotest_common.sh@10 -- # set +x 00:03:35.453 ************************************ 00:03:35.453 START TEST skip_rpc 00:03:35.453 ************************************ 00:03:35.453 16:15:21 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.453 * Looking for test storage... 
00:03:35.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:35.453 16:15:21 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:35.453 16:15:21 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:35.453 16:15:21 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:35.453 16:15:21 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.453 16:15:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.714 16:15:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:35.714 16:15:21 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.714 16:15:21 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.714 --rc genhtml_branch_coverage=1 00:03:35.714 --rc genhtml_function_coverage=1 00:03:35.714 --rc genhtml_legend=1 00:03:35.714 --rc geninfo_all_blocks=1 00:03:35.714 --rc geninfo_unexecuted_blocks=1 00:03:35.714 00:03:35.714 ' 00:03:35.714 16:15:21 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.714 --rc genhtml_branch_coverage=1 00:03:35.714 --rc genhtml_function_coverage=1 00:03:35.714 --rc genhtml_legend=1 00:03:35.714 --rc geninfo_all_blocks=1 00:03:35.714 --rc geninfo_unexecuted_blocks=1 00:03:35.714 00:03:35.714 ' 00:03:35.714 16:15:21 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.714 --rc genhtml_branch_coverage=1 00:03:35.714 --rc genhtml_function_coverage=1 00:03:35.714 --rc genhtml_legend=1 00:03:35.714 --rc geninfo_all_blocks=1 00:03:35.714 --rc geninfo_unexecuted_blocks=1 00:03:35.714 00:03:35.714 ' 00:03:35.714 16:15:21 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:35.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.714 --rc genhtml_branch_coverage=1 00:03:35.714 --rc genhtml_function_coverage=1 00:03:35.714 --rc genhtml_legend=1 00:03:35.714 --rc geninfo_all_blocks=1 00:03:35.714 --rc geninfo_unexecuted_blocks=1 00:03:35.714 00:03:35.714 ' 00:03:35.714 16:15:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.714 16:15:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.714 16:15:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:35.714 16:15:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.714 16:15:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.714 16:15:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.714 ************************************ 00:03:35.714 START TEST skip_rpc 00:03:35.714 ************************************ 00:03:35.714 16:15:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:35.714 16:15:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:35.714 16:15:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1963281 00:03:35.714 16:15:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:35.714 16:15:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:35.714 [2024-11-20 16:15:21.489078] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:03:35.714 [2024-11-20 16:15:21.489130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963281 ] 00:03:35.714 [2024-11-20 16:15:21.558490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.714 [2024-11-20 16:15:21.594773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:40.989 16:15:26 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1963281 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1963281 ']' 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1963281 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1963281 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1963281' 00:03:40.989 killing process with pid 1963281 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1963281 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1963281 00:03:40.989 00:03:40.989 real 0m5.284s 00:03:40.989 user 0m5.102s 00:03:40.989 sys 0m0.229s 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:40.989 16:15:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.989 ************************************ 00:03:40.989 END TEST skip_rpc 00:03:40.989 ************************************ 00:03:40.989 16:15:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:40.989 16:15:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:40.989 16:15:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:40.989 16:15:26 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.989 ************************************ 00:03:40.989 START TEST skip_rpc_with_json 00:03:40.989 ************************************ 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1964467 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1964467 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1964467 ']' 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:40.989 16:15:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:40.989 [2024-11-20 16:15:26.869323] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:03:40.989 [2024-11-20 16:15:26.869378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1964467 ] 00:03:40.989 [2024-11-20 16:15:26.944673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:41.248 [2024-11-20 16:15:26.984858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.817 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:41.817 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:41.817 16:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:41.817 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.817 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.817 [2024-11-20 16:15:27.677496] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:41.817 request: 00:03:41.817 { 00:03:41.817 "trtype": "tcp", 00:03:41.817 "method": "nvmf_get_transports", 00:03:41.817 "req_id": 1 00:03:41.817 } 00:03:41.817 Got JSON-RPC error response 00:03:41.817 response: 00:03:41.817 { 00:03:41.817 "code": -19, 00:03:41.817 "message": "No such device" 00:03:41.817 } 00:03:41.817 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:41.818 16:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:41.818 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.818 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.818 [2024-11-20 16:15:27.689620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:41.818 16:15:27 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:41.818 16:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:41.818 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:41.818 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:42.078 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.078 16:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:42.078 { 00:03:42.078 "subsystems": [ 00:03:42.078 { 00:03:42.078 "subsystem": "fsdev", 00:03:42.078 "config": [ 00:03:42.078 { 00:03:42.078 "method": "fsdev_set_opts", 00:03:42.078 "params": { 00:03:42.078 "fsdev_io_pool_size": 65535, 00:03:42.078 "fsdev_io_cache_size": 256 00:03:42.078 } 00:03:42.078 } 00:03:42.078 ] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "vfio_user_target", 00:03:42.078 "config": null 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "keyring", 00:03:42.078 "config": [] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "iobuf", 00:03:42.078 "config": [ 00:03:42.078 { 00:03:42.078 "method": "iobuf_set_options", 00:03:42.078 "params": { 00:03:42.078 "small_pool_count": 8192, 00:03:42.078 "large_pool_count": 1024, 00:03:42.078 "small_bufsize": 8192, 00:03:42.078 "large_bufsize": 135168, 00:03:42.078 "enable_numa": false 00:03:42.078 } 00:03:42.078 } 00:03:42.078 ] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "sock", 00:03:42.078 "config": [ 00:03:42.078 { 00:03:42.078 "method": "sock_set_default_impl", 00:03:42.078 "params": { 00:03:42.078 "impl_name": "posix" 00:03:42.078 } 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "method": "sock_impl_set_options", 00:03:42.078 "params": { 00:03:42.078 "impl_name": "ssl", 00:03:42.078 "recv_buf_size": 4096, 00:03:42.078 "send_buf_size": 4096, 
00:03:42.078 "enable_recv_pipe": true, 00:03:42.078 "enable_quickack": false, 00:03:42.078 "enable_placement_id": 0, 00:03:42.078 "enable_zerocopy_send_server": true, 00:03:42.078 "enable_zerocopy_send_client": false, 00:03:42.078 "zerocopy_threshold": 0, 00:03:42.078 "tls_version": 0, 00:03:42.078 "enable_ktls": false 00:03:42.078 } 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "method": "sock_impl_set_options", 00:03:42.078 "params": { 00:03:42.078 "impl_name": "posix", 00:03:42.078 "recv_buf_size": 2097152, 00:03:42.078 "send_buf_size": 2097152, 00:03:42.078 "enable_recv_pipe": true, 00:03:42.078 "enable_quickack": false, 00:03:42.078 "enable_placement_id": 0, 00:03:42.078 "enable_zerocopy_send_server": true, 00:03:42.078 "enable_zerocopy_send_client": false, 00:03:42.078 "zerocopy_threshold": 0, 00:03:42.078 "tls_version": 0, 00:03:42.078 "enable_ktls": false 00:03:42.078 } 00:03:42.078 } 00:03:42.078 ] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "vmd", 00:03:42.078 "config": [] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "accel", 00:03:42.078 "config": [ 00:03:42.078 { 00:03:42.078 "method": "accel_set_options", 00:03:42.078 "params": { 00:03:42.078 "small_cache_size": 128, 00:03:42.078 "large_cache_size": 16, 00:03:42.078 "task_count": 2048, 00:03:42.078 "sequence_count": 2048, 00:03:42.078 "buf_count": 2048 00:03:42.078 } 00:03:42.078 } 00:03:42.078 ] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "bdev", 00:03:42.078 "config": [ 00:03:42.078 { 00:03:42.078 "method": "bdev_set_options", 00:03:42.078 "params": { 00:03:42.078 "bdev_io_pool_size": 65535, 00:03:42.078 "bdev_io_cache_size": 256, 00:03:42.078 "bdev_auto_examine": true, 00:03:42.078 "iobuf_small_cache_size": 128, 00:03:42.078 "iobuf_large_cache_size": 16 00:03:42.078 } 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "method": "bdev_raid_set_options", 00:03:42.078 "params": { 00:03:42.078 "process_window_size_kb": 1024, 00:03:42.078 "process_max_bandwidth_mb_sec": 0 
00:03:42.078 } 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "method": "bdev_iscsi_set_options", 00:03:42.078 "params": { 00:03:42.078 "timeout_sec": 30 00:03:42.078 } 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "method": "bdev_nvme_set_options", 00:03:42.078 "params": { 00:03:42.078 "action_on_timeout": "none", 00:03:42.078 "timeout_us": 0, 00:03:42.078 "timeout_admin_us": 0, 00:03:42.078 "keep_alive_timeout_ms": 10000, 00:03:42.078 "arbitration_burst": 0, 00:03:42.078 "low_priority_weight": 0, 00:03:42.078 "medium_priority_weight": 0, 00:03:42.078 "high_priority_weight": 0, 00:03:42.078 "nvme_adminq_poll_period_us": 10000, 00:03:42.078 "nvme_ioq_poll_period_us": 0, 00:03:42.078 "io_queue_requests": 0, 00:03:42.078 "delay_cmd_submit": true, 00:03:42.078 "transport_retry_count": 4, 00:03:42.078 "bdev_retry_count": 3, 00:03:42.078 "transport_ack_timeout": 0, 00:03:42.078 "ctrlr_loss_timeout_sec": 0, 00:03:42.078 "reconnect_delay_sec": 0, 00:03:42.078 "fast_io_fail_timeout_sec": 0, 00:03:42.078 "disable_auto_failback": false, 00:03:42.078 "generate_uuids": false, 00:03:42.078 "transport_tos": 0, 00:03:42.078 "nvme_error_stat": false, 00:03:42.078 "rdma_srq_size": 0, 00:03:42.078 "io_path_stat": false, 00:03:42.078 "allow_accel_sequence": false, 00:03:42.078 "rdma_max_cq_size": 0, 00:03:42.078 "rdma_cm_event_timeout_ms": 0, 00:03:42.078 "dhchap_digests": [ 00:03:42.078 "sha256", 00:03:42.078 "sha384", 00:03:42.078 "sha512" 00:03:42.078 ], 00:03:42.078 "dhchap_dhgroups": [ 00:03:42.078 "null", 00:03:42.078 "ffdhe2048", 00:03:42.078 "ffdhe3072", 00:03:42.078 "ffdhe4096", 00:03:42.078 "ffdhe6144", 00:03:42.078 "ffdhe8192" 00:03:42.078 ] 00:03:42.078 } 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "method": "bdev_nvme_set_hotplug", 00:03:42.078 "params": { 00:03:42.078 "period_us": 100000, 00:03:42.078 "enable": false 00:03:42.078 } 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "method": "bdev_wait_for_examine" 00:03:42.078 } 00:03:42.078 ] 00:03:42.078 }, 00:03:42.078 { 
00:03:42.078 "subsystem": "scsi", 00:03:42.078 "config": null 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "scheduler", 00:03:42.078 "config": [ 00:03:42.078 { 00:03:42.078 "method": "framework_set_scheduler", 00:03:42.078 "params": { 00:03:42.078 "name": "static" 00:03:42.078 } 00:03:42.078 } 00:03:42.078 ] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "vhost_scsi", 00:03:42.078 "config": [] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "vhost_blk", 00:03:42.078 "config": [] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "ublk", 00:03:42.078 "config": [] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "nbd", 00:03:42.078 "config": [] 00:03:42.078 }, 00:03:42.078 { 00:03:42.078 "subsystem": "nvmf", 00:03:42.078 "config": [ 00:03:42.078 { 00:03:42.078 "method": "nvmf_set_config", 00:03:42.078 "params": { 00:03:42.078 "discovery_filter": "match_any", 00:03:42.078 "admin_cmd_passthru": { 00:03:42.078 "identify_ctrlr": false 00:03:42.078 }, 00:03:42.078 "dhchap_digests": [ 00:03:42.078 "sha256", 00:03:42.078 "sha384", 00:03:42.078 "sha512" 00:03:42.078 ], 00:03:42.078 "dhchap_dhgroups": [ 00:03:42.078 "null", 00:03:42.078 "ffdhe2048", 00:03:42.078 "ffdhe3072", 00:03:42.078 "ffdhe4096", 00:03:42.078 "ffdhe6144", 00:03:42.078 "ffdhe8192" 00:03:42.078 ] 00:03:42.078 } 00:03:42.078 }, 00:03:42.079 { 00:03:42.079 "method": "nvmf_set_max_subsystems", 00:03:42.079 "params": { 00:03:42.079 "max_subsystems": 1024 00:03:42.079 } 00:03:42.079 }, 00:03:42.079 { 00:03:42.079 "method": "nvmf_set_crdt", 00:03:42.079 "params": { 00:03:42.079 "crdt1": 0, 00:03:42.079 "crdt2": 0, 00:03:42.079 "crdt3": 0 00:03:42.079 } 00:03:42.079 }, 00:03:42.079 { 00:03:42.079 "method": "nvmf_create_transport", 00:03:42.079 "params": { 00:03:42.079 "trtype": "TCP", 00:03:42.079 "max_queue_depth": 128, 00:03:42.079 "max_io_qpairs_per_ctrlr": 127, 00:03:42.079 "in_capsule_data_size": 4096, 00:03:42.079 "max_io_size": 131072, 00:03:42.079 
"io_unit_size": 131072, 00:03:42.079 "max_aq_depth": 128, 00:03:42.079 "num_shared_buffers": 511, 00:03:42.079 "buf_cache_size": 4294967295, 00:03:42.079 "dif_insert_or_strip": false, 00:03:42.079 "zcopy": false, 00:03:42.079 "c2h_success": true, 00:03:42.079 "sock_priority": 0, 00:03:42.079 "abort_timeout_sec": 1, 00:03:42.079 "ack_timeout": 0, 00:03:42.079 "data_wr_pool_size": 0 00:03:42.079 } 00:03:42.079 } 00:03:42.079 ] 00:03:42.079 }, 00:03:42.079 { 00:03:42.079 "subsystem": "iscsi", 00:03:42.079 "config": [ 00:03:42.079 { 00:03:42.079 "method": "iscsi_set_options", 00:03:42.079 "params": { 00:03:42.079 "node_base": "iqn.2016-06.io.spdk", 00:03:42.079 "max_sessions": 128, 00:03:42.079 "max_connections_per_session": 2, 00:03:42.079 "max_queue_depth": 64, 00:03:42.079 "default_time2wait": 2, 00:03:42.079 "default_time2retain": 20, 00:03:42.079 "first_burst_length": 8192, 00:03:42.079 "immediate_data": true, 00:03:42.079 "allow_duplicated_isid": false, 00:03:42.079 "error_recovery_level": 0, 00:03:42.079 "nop_timeout": 60, 00:03:42.079 "nop_in_interval": 30, 00:03:42.079 "disable_chap": false, 00:03:42.079 "require_chap": false, 00:03:42.079 "mutual_chap": false, 00:03:42.079 "chap_group": 0, 00:03:42.079 "max_large_datain_per_connection": 64, 00:03:42.079 "max_r2t_per_connection": 4, 00:03:42.079 "pdu_pool_size": 36864, 00:03:42.079 "immediate_data_pool_size": 16384, 00:03:42.079 "data_out_pool_size": 2048 00:03:42.079 } 00:03:42.079 } 00:03:42.079 ] 00:03:42.079 } 00:03:42.079 ] 00:03:42.079 } 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1964467 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1964467 ']' 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1964467 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1964467 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1964467' 00:03:42.079 killing process with pid 1964467 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1964467 00:03:42.079 16:15:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1964467 00:03:42.339 16:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1964647 00:03:42.340 16:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:42.340 16:15:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1964647 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1964647 ']' 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1964647 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1964647 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1964647' 00:03:47.625 killing process with pid 1964647 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1964647 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1964647 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.625 00:03:47.625 real 0m6.614s 00:03:47.625 user 0m6.536s 00:03:47.625 sys 0m0.552s 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.625 ************************************ 00:03:47.625 END TEST skip_rpc_with_json 00:03:47.625 ************************************ 00:03:47.625 16:15:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:47.625 16:15:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.625 16:15:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.625 16:15:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.625 ************************************ 00:03:47.625 START TEST skip_rpc_with_delay 00:03:47.625 ************************************ 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.625 [2024-11-20 16:15:33.564794] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:47.625 00:03:47.625 real 0m0.078s 00:03:47.625 user 0m0.050s 00:03:47.625 sys 0m0.027s 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.625 16:15:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:47.625 ************************************ 00:03:47.625 END TEST skip_rpc_with_delay 00:03:47.625 ************************************ 00:03:47.887 16:15:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:47.887 16:15:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:47.887 16:15:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:47.887 16:15:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.887 16:15:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.887 16:15:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.887 ************************************ 00:03:47.887 START TEST exit_on_failed_rpc_init 00:03:47.887 ************************************ 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1965959 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1965959 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1965959 ']' 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init 
-- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.887 16:15:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:47.887 [2024-11-20 16:15:33.722312] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:03:47.887 [2024-11-20 16:15:33.722378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965959 ] 00:03:47.887 [2024-11-20 16:15:33.800797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.887 [2024-11-20 16:15:33.842972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.879 16:15:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:48.879 [2024-11-20 16:15:34.578045] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:03:48.879 [2024-11-20 16:15:34.578096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966040 ] 00:03:48.879 [2024-11-20 16:15:34.665608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.879 [2024-11-20 16:15:34.701173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:48.879 [2024-11-20 16:15:34.701221] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:48.879 [2024-11-20 16:15:34.701231] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:48.879 [2024-11-20 16:15:34.701238] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1965959 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1965959 ']' 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1965959 00:03:48.879 16:15:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1965959 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1965959' 00:03:48.879 killing process with pid 1965959 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1965959 00:03:48.879 16:15:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1965959 00:03:49.140 00:03:49.140 real 0m1.354s 00:03:49.140 user 0m1.583s 00:03:49.140 sys 0m0.387s 00:03:49.140 16:15:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.140 16:15:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:49.140 ************************************ 00:03:49.140 END TEST exit_on_failed_rpc_init 00:03:49.140 ************************************ 00:03:49.140 16:15:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.140 00:03:49.140 real 0m13.842s 00:03:49.140 user 0m13.500s 00:03:49.140 sys 0m1.505s 00:03:49.140 16:15:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.140 16:15:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.140 ************************************ 00:03:49.140 END TEST skip_rpc 00:03:49.140 ************************************ 00:03:49.140 16:15:35 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:49.140 16:15:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.140 16:15:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.140 16:15:35 -- common/autotest_common.sh@10 -- # set +x 00:03:49.401 ************************************ 00:03:49.401 START TEST rpc_client 00:03:49.401 ************************************ 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:49.401 * Looking for test storage... 00:03:49.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.401 16:15:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.401 --rc genhtml_branch_coverage=1 00:03:49.401 --rc genhtml_function_coverage=1 00:03:49.401 --rc genhtml_legend=1 00:03:49.401 --rc geninfo_all_blocks=1 00:03:49.401 --rc geninfo_unexecuted_blocks=1 00:03:49.401 00:03:49.401 ' 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.401 --rc genhtml_branch_coverage=1 
00:03:49.401 --rc genhtml_function_coverage=1 00:03:49.401 --rc genhtml_legend=1 00:03:49.401 --rc geninfo_all_blocks=1 00:03:49.401 --rc geninfo_unexecuted_blocks=1 00:03:49.401 00:03:49.401 ' 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:49.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.401 --rc genhtml_branch_coverage=1 00:03:49.401 --rc genhtml_function_coverage=1 00:03:49.401 --rc genhtml_legend=1 00:03:49.401 --rc geninfo_all_blocks=1 00:03:49.401 --rc geninfo_unexecuted_blocks=1 00:03:49.401 00:03:49.401 ' 00:03:49.401 16:15:35 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.401 --rc genhtml_branch_coverage=1 00:03:49.401 --rc genhtml_function_coverage=1 00:03:49.401 --rc genhtml_legend=1 00:03:49.401 --rc geninfo_all_blocks=1 00:03:49.402 --rc geninfo_unexecuted_blocks=1 00:03:49.402 00:03:49.402 ' 00:03:49.402 16:15:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:49.402 OK 00:03:49.664 16:15:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:49.664 00:03:49.664 real 0m0.228s 00:03:49.664 user 0m0.128s 00:03:49.664 sys 0m0.111s 00:03:49.664 16:15:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.664 16:15:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:49.664 ************************************ 00:03:49.664 END TEST rpc_client 00:03:49.664 ************************************ 00:03:49.664 16:15:35 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:49.664 16:15:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.664 16:15:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.664 16:15:35 -- common/autotest_common.sh@10 
-- # set +x 00:03:49.664 ************************************ 00:03:49.664 START TEST json_config 00:03:49.664 ************************************ 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.664 16:15:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.664 16:15:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.664 16:15:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.664 16:15:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.664 16:15:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.664 16:15:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.664 16:15:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.664 16:15:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.664 16:15:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.664 16:15:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.664 16:15:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.664 16:15:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:49.664 16:15:35 json_config -- scripts/common.sh@345 -- # : 1 00:03:49.664 16:15:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.664 16:15:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.664 16:15:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:49.664 16:15:35 json_config -- scripts/common.sh@353 -- # local d=1 00:03:49.664 16:15:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.664 16:15:35 json_config -- scripts/common.sh@355 -- # echo 1 00:03:49.664 16:15:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.664 16:15:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:49.664 16:15:35 json_config -- scripts/common.sh@353 -- # local d=2 00:03:49.664 16:15:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.664 16:15:35 json_config -- scripts/common.sh@355 -- # echo 2 00:03:49.664 16:15:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.664 16:15:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.664 16:15:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.664 16:15:35 json_config -- scripts/common.sh@368 -- # return 0 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.664 --rc genhtml_branch_coverage=1 00:03:49.664 --rc genhtml_function_coverage=1 00:03:49.664 --rc genhtml_legend=1 00:03:49.664 --rc geninfo_all_blocks=1 00:03:49.664 --rc geninfo_unexecuted_blocks=1 00:03:49.664 00:03:49.664 ' 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.664 --rc genhtml_branch_coverage=1 00:03:49.664 --rc genhtml_function_coverage=1 00:03:49.664 --rc genhtml_legend=1 00:03:49.664 --rc geninfo_all_blocks=1 00:03:49.664 --rc geninfo_unexecuted_blocks=1 00:03:49.664 00:03:49.664 ' 00:03:49.664 16:15:35 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:49.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.664 --rc genhtml_branch_coverage=1 00:03:49.664 --rc genhtml_function_coverage=1 00:03:49.664 --rc genhtml_legend=1 00:03:49.664 --rc geninfo_all_blocks=1 00:03:49.664 --rc geninfo_unexecuted_blocks=1 00:03:49.664 00:03:49.664 ' 00:03:49.664 16:15:35 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.664 --rc genhtml_branch_coverage=1 00:03:49.664 --rc genhtml_function_coverage=1 00:03:49.664 --rc genhtml_legend=1 00:03:49.664 --rc geninfo_all_blocks=1 00:03:49.664 --rc geninfo_unexecuted_blocks=1 00:03:49.664 00:03:49.664 ' 00:03:49.664 16:15:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.664 16:15:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:49.926 16:15:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.926 16:15:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.926 16:15:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.926 16:15:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.926 16:15:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.926 16:15:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.926 16:15:35 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.926 16:15:35 json_config -- paths/export.sh@5 -- # export PATH 00:03:49.926 16:15:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@51 -- # : 0 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.926 16:15:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:49.926 INFO: JSON configuration test init 00:03:49.926 16:15:35 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.926 16:15:35 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:49.926 16:15:35 json_config -- json_config/common.sh@9 -- # local app=target 00:03:49.926 16:15:35 json_config -- json_config/common.sh@10 -- # shift 00:03:49.926 16:15:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:49.926 16:15:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:49.926 16:15:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:49.926 16:15:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.926 16:15:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:49.926 16:15:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1966502 00:03:49.926 16:15:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:49.926 Waiting for target to run... 
00:03:49.926 16:15:35 json_config -- json_config/common.sh@25 -- # waitforlisten 1966502 /var/tmp/spdk_tgt.sock 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@835 -- # '[' -z 1966502 ']' 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:49.926 16:15:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:49.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.926 16:15:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.926 [2024-11-20 16:15:35.717735] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:03:49.926 [2024-11-20 16:15:35.717787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966502 ] 00:03:50.188 [2024-11-20 16:15:36.011294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.188 [2024-11-20 16:15:36.040699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.760 16:15:36 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.760 16:15:36 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:50.760 16:15:36 json_config -- json_config/common.sh@26 -- # echo '' 00:03:50.760 00:03:50.760 16:15:36 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:50.760 16:15:36 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:50.760 16:15:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.760 16:15:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.760 16:15:36 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:50.760 16:15:36 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:50.760 16:15:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.760 16:15:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.760 16:15:36 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:50.760 16:15:36 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:50.760 16:15:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:51.331 16:15:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.331 16:15:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:51.331 16:15:37 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:51.331 16:15:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@54 -- # sort 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:51.592 16:15:37 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:51.592 16:15:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:51.592 16:15:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:51.592 16:15:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.592 16:15:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:51.592 16:15:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.592 16:15:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:51.592 MallocForNvmf0 00:03:51.854 16:15:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:51.854 16:15:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:51.854 MallocForNvmf1 00:03:51.854 16:15:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:51.854 16:15:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:52.115 [2024-11-20 16:15:37.890800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:52.115 16:15:37 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.115 16:15:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:52.449 16:15:38 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.449 16:15:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:52.449 16:15:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.449 16:15:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:52.711 16:15:38 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.711 16:15:38 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:52.711 [2024-11-20 16:15:38.601060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:52.711 16:15:38 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:52.711 16:15:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.711 16:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.711 16:15:38 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:52.711 16:15:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.711 16:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.972 16:15:38 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:52.972 16:15:38 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:52.972 16:15:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:52.972 MallocBdevForConfigChangeCheck 00:03:52.972 16:15:38 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:52.972 16:15:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.972 16:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.972 16:15:38 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:52.972 16:15:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.545 16:15:39 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:53.545 INFO: shutting down applications... 00:03:53.545 16:15:39 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:53.545 16:15:39 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:53.545 16:15:39 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:53.545 16:15:39 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:53.805 Calling clear_iscsi_subsystem 00:03:53.805 Calling clear_nvmf_subsystem 00:03:53.805 Calling clear_nbd_subsystem 00:03:53.805 Calling clear_ublk_subsystem 00:03:53.805 Calling clear_vhost_blk_subsystem 00:03:53.805 Calling clear_vhost_scsi_subsystem 00:03:53.805 Calling clear_bdev_subsystem 00:03:53.806 16:15:39 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:53.806 16:15:39 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:53.806 16:15:39 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:53.806 16:15:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.806 16:15:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:53.806 16:15:39 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:54.378 16:15:40 json_config -- json_config/json_config.sh@352 -- # break 00:03:54.378 16:15:40 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:54.378 16:15:40 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:54.378 16:15:40 json_config -- json_config/common.sh@31 -- # local app=target 00:03:54.378 16:15:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:54.378 16:15:40 json_config -- json_config/common.sh@35 -- # [[ -n 1966502 ]] 00:03:54.378 16:15:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1966502 00:03:54.378 16:15:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:54.378 16:15:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.378 16:15:40 json_config -- json_config/common.sh@41 -- # kill -0 1966502 00:03:54.378 16:15:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:54.639 16:15:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:54.639 16:15:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.639 16:15:40 json_config -- json_config/common.sh@41 -- # kill -0 1966502 00:03:54.639 16:15:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:54.639 16:15:40 json_config -- json_config/common.sh@43 -- # break 00:03:54.639 16:15:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:54.639 16:15:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:54.639 SPDK target shutdown done 00:03:54.639 16:15:40 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:54.639 INFO: relaunching applications... 
00:03:54.639 16:15:40 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.639 16:15:40 json_config -- json_config/common.sh@9 -- # local app=target 00:03:54.639 16:15:40 json_config -- json_config/common.sh@10 -- # shift 00:03:54.639 16:15:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:54.639 16:15:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:54.639 16:15:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:54.639 16:15:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.639 16:15:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:54.639 16:15:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1967633 00:03:54.639 16:15:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:54.639 Waiting for target to run... 00:03:54.639 16:15:40 json_config -- json_config/common.sh@25 -- # waitforlisten 1967633 /var/tmp/spdk_tgt.sock 00:03:54.639 16:15:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.639 16:15:40 json_config -- common/autotest_common.sh@835 -- # '[' -z 1967633 ']' 00:03:54.639 16:15:40 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.639 16:15:40 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:54.639 16:15:40 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:54.639 16:15:40 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:54.639 16:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.900 [2024-11-20 16:15:40.613937] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:03:54.900 [2024-11-20 16:15:40.614009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967633 ] 00:03:55.160 [2024-11-20 16:15:40.918629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.160 [2024-11-20 16:15:40.947820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.730 [2024-11-20 16:15:41.460978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.730 [2024-11-20 16:15:41.493347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.730 16:15:41 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.730 16:15:41 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:55.730 16:15:41 json_config -- json_config/common.sh@26 -- # echo '' 00:03:55.730 00:03:55.730 16:15:41 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:55.730 16:15:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.730 INFO: Checking if target configuration is the same... 
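`waitforlisten` in the trace above blocks until the freshly launched target is reachable on `/var/tmp/spdk_tgt.sock` (with `max_retries=100`). A simplified stand-in polls for the UNIX-domain socket to appear while checking the pid is still alive; the real `autotest_common.sh` helper additionally issues an RPC to confirm the target answers. Names below are illustrative:

```shell
# Wait until <pid> is alive *and* <sock> exists as a UNIX-domain socket.
wait_for_socket() {
    local pid=$1 sock=$2 max_retries=${3:-100}
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$sock" ] && return 0               # socket is up
        sleep 0.1
    done
    return 1
}

# Demo: a background helper creates the socket shortly after we start waiting.
rm -f /tmp/demo_tgt.sock
( sleep 0.2; python3 -c 'import socket; socket.socket(socket.AF_UNIX).bind("/tmp/demo_tgt.sock")' ) &
wait_for_socket $$ /tmp/demo_tgt.sock && echo 'Waiting for target to run... done'
rm -f /tmp/demo_tgt.sock
```

Checking the pid on every iteration is what lets the wrapper fail fast when the target crashes during init instead of burning the full retry budget.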
00:03:55.730 16:15:41 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.730 16:15:41 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:55.730 16:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.730 + '[' 2 -ne 2 ']' 00:03:55.730 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:55.731 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:55.731 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.731 +++ basename /dev/fd/62 00:03:55.731 ++ mktemp /tmp/62.XXX 00:03:55.731 + tmp_file_1=/tmp/62.SWR 00:03:55.731 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.731 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.731 + tmp_file_2=/tmp/spdk_tgt_config.json.DAE 00:03:55.731 + ret=0 00:03:55.731 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.991 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:55.991 + diff -u /tmp/62.SWR /tmp/spdk_tgt_config.json.DAE 00:03:55.991 + echo 'INFO: JSON config files are the same' 00:03:55.991 INFO: JSON config files are the same 00:03:55.991 + rm /tmp/62.SWR /tmp/spdk_tgt_config.json.DAE 00:03:55.991 + exit 0 00:03:55.991 16:15:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:55.991 16:15:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:55.991 INFO: changing configuration and checking if this can be detected... 
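The "same configuration" check above boils down to: dump the live config via `save_config`, sort both the dump and the reference file into a canonical form, and `diff -u` them. A sketch of that comparison, using plain `python3` key-sorting as a stand-in for SPDK's `config_filter.py -method sort` (which also sorts the subsystem arrays, not just object keys):

```shell
# Canonicalize JSON (sorted keys, fixed indentation) so diff ignores ordering.
canon() { python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'; }

tmp1=$(mktemp /tmp/62.XXX)
tmp2=$(mktemp /tmp/tgt_config.json.XXX)
# Same content, different key order -> must compare equal after canon.
echo '{"subsystems": [], "method": "save_config"}' | canon > "$tmp1"
echo '{"method": "save_config", "subsystems": []}' | canon > "$tmp2"

if diff -u "$tmp1" "$tmp2" > /dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp1" "$tmp2"
```

Deleting a bdev (as the trace does next with `bdev_malloc_delete MallocBdevForConfigChangeCheck`) makes the diff non-empty, flipping the script onto the `ret=1` / "configuration change detected" path.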
00:03:55.991 16:15:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:55.991 16:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.253 16:15:42 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.253 16:15:42 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:56.253 16:15:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.253 + '[' 2 -ne 2 ']' 00:03:56.253 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:56.253 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:56.253 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.253 +++ basename /dev/fd/62 00:03:56.253 ++ mktemp /tmp/62.XXX 00:03:56.253 + tmp_file_1=/tmp/62.9DY 00:03:56.253 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.253 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.253 + tmp_file_2=/tmp/spdk_tgt_config.json.kuM 00:03:56.253 + ret=0 00:03:56.253 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.514 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.514 + diff -u /tmp/62.9DY /tmp/spdk_tgt_config.json.kuM 00:03:56.514 + ret=1 00:03:56.514 + echo '=== Start of file: /tmp/62.9DY ===' 00:03:56.514 + cat /tmp/62.9DY 00:03:56.514 + echo '=== End of file: /tmp/62.9DY ===' 00:03:56.514 + echo '' 00:03:56.514 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kuM ===' 00:03:56.514 + cat /tmp/spdk_tgt_config.json.kuM 00:03:56.514 + echo '=== End of file: /tmp/spdk_tgt_config.json.kuM ===' 00:03:56.514 + echo '' 00:03:56.514 + rm /tmp/62.9DY /tmp/spdk_tgt_config.json.kuM 00:03:56.775 + exit 1 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:56.775 INFO: configuration change detected. 
00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@324 -- # [[ -n 1967633 ]] 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.775 16:15:42 json_config -- json_config/json_config.sh@330 -- # killprocess 1967633 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@954 -- # '[' -z 1967633 ']' 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@958 -- # kill -0 
1967633 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@959 -- # uname 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967633 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967633' 00:03:56.775 killing process with pid 1967633 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@973 -- # kill 1967633 00:03:56.775 16:15:42 json_config -- common/autotest_common.sh@978 -- # wait 1967633 00:03:57.036 16:15:42 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:57.036 16:15:42 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:57.036 16:15:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.036 16:15:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.036 16:15:42 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:57.036 16:15:42 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:57.036 INFO: Success 00:03:57.036 00:03:57.036 real 0m7.490s 00:03:57.036 user 0m9.095s 00:03:57.036 sys 0m1.993s 00:03:57.036 16:15:42 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.036 16:15:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.036 ************************************ 00:03:57.036 END TEST json_config 00:03:57.036 ************************************ 00:03:57.036 16:15:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.036 16:15:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.036 16:15:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.036 16:15:42 -- common/autotest_common.sh@10 -- # set +x 00:03:57.297 ************************************ 00:03:57.297 START TEST json_config_extra_key 00:03:57.297 ************************************ 00:03:57.297 16:15:42 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:57.297 16:15:43 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.297 16:15:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.297 16:15:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.297 16:15:43 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.297 16:15:43 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:57.298 16:15:43 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.298 16:15:43 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.298 --rc genhtml_branch_coverage=1 00:03:57.298 --rc genhtml_function_coverage=1 00:03:57.298 --rc genhtml_legend=1 00:03:57.298 --rc geninfo_all_blocks=1 
00:03:57.298 --rc geninfo_unexecuted_blocks=1 00:03:57.298 00:03:57.298 ' 00:03:57.298 16:15:43 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.298 --rc genhtml_branch_coverage=1 00:03:57.298 --rc genhtml_function_coverage=1 00:03:57.298 --rc genhtml_legend=1 00:03:57.298 --rc geninfo_all_blocks=1 00:03:57.298 --rc geninfo_unexecuted_blocks=1 00:03:57.298 00:03:57.298 ' 00:03:57.298 16:15:43 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.298 --rc genhtml_branch_coverage=1 00:03:57.298 --rc genhtml_function_coverage=1 00:03:57.298 --rc genhtml_legend=1 00:03:57.298 --rc geninfo_all_blocks=1 00:03:57.298 --rc geninfo_unexecuted_blocks=1 00:03:57.298 00:03:57.298 ' 00:03:57.298 16:15:43 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.298 --rc genhtml_branch_coverage=1 00:03:57.298 --rc genhtml_function_coverage=1 00:03:57.298 --rc genhtml_legend=1 00:03:57.298 --rc geninfo_all_blocks=1 00:03:57.298 --rc geninfo_unexecuted_blocks=1 00:03:57.298 00:03:57.298 ' 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
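The `lt 1.15 2` trace above is `scripts/common.sh` doing a component-wise version comparison: split both strings on `.`, `-`, or `:`, then compare field by field numerically, padding missing fields with 0. A minimal re-implementation of the idea — `ver_lt` is an illustrative name, and unlike the real `cmp_versions` it handles only numeric components:

```shell
# Return 0 (true) if version $1 sorts strictly before version $2.
ver_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for ((i = 0; i < n; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing fields with 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo '1.15 < 2'
```

This is why the lcov check above passes: field 0 compares 1 against 2, so `1.15 < 2` even though 15 > 2 lexically.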
00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.298 16:15:43 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.298 16:15:43 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.298 16:15:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.298 16:15:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.298 16:15:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:57.298 16:15:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:57.298 16:15:43 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:57.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:57.298 16:15:43 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:57.298 INFO: launching applications... 00:03:57.298 16:15:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1968106 00:03:57.298 16:15:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.299 Waiting for target to run... 
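The `paths/export.sh` trace further up (the long `PATH=/opt/golangci/...` assignments) prepends the same toolchain directories each time it is sourced, which is why the resulting `PATH` repeats `/opt/go/1.21.1/bin` and friends several times. Duplicates are harmless to lookup but noisy; a first-occurrence dedup pass looks like this (illustrative, not part of SPDK):

```shell
# Keep only the first occurrence of each colon-separated PATH entry.
dedup_path() {
    local IFS=: entry out='' seen=':'
    for entry in $1; do
        case "$seen" in *":$entry:"*) continue ;; esac   # already kept
        seen="$seen$entry:"
        out="${out:+$out:}$entry"
    done
    printf '%s\n' "$out"
}

dedup_path '/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin:/usr/bin'
```

Prepending without deduplication is still the simpler and safer default for a CI script: it guarantees the pinned toolchain wins regardless of what the base image put first.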
00:03:57.299 16:15:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1968106 /var/tmp/spdk_tgt.sock 00:03:57.299 16:15:43 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1968106 ']' 00:03:57.299 16:15:43 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.299 16:15:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:57.299 16:15:43 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.299 16:15:43 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.299 16:15:43 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.299 16:15:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:57.560 [2024-11-20 16:15:43.260425] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:03:57.560 [2024-11-20 16:15:43.260479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968106 ] 00:03:57.820 [2024-11-20 16:15:43.562240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.820 [2024-11-20 16:15:43.593385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.394 16:15:44 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.394 16:15:44 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:58.394 00:03:58.394 16:15:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:58.394 INFO: shutting down applications... 00:03:58.394 16:15:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1968106 ]] 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1968106 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1968106 00:03:58.394 16:15:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:58.655 16:15:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:58.655 16:15:44 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.655 16:15:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1968106 00:03:58.655 16:15:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:58.655 16:15:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:58.655 16:15:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:58.655 16:15:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:58.655 SPDK target shutdown done 00:03:58.655 16:15:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:58.655 Success 00:03:58.655 00:03:58.655 real 0m1.607s 00:03:58.655 user 0m1.283s 00:03:58.655 sys 0m0.404s 00:03:58.655 16:15:44 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:58.655 16:15:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:58.655 ************************************ 00:03:58.655 END TEST json_config_extra_key 00:03:58.655 ************************************ 00:03:58.916 16:15:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:58.916 16:15:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:58.916 16:15:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:58.916 16:15:44 -- common/autotest_common.sh@10 -- # set +x 00:03:58.916 ************************************ 00:03:58.916 START TEST alias_rpc 00:03:58.916 ************************************ 00:03:58.916 16:15:44 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:58.916 * Looking for test storage... 
00:03:58.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:58.916 16:15:44 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:58.916 16:15:44 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:58.916 16:15:44 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:58.916 16:15:44 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.916 16:15:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:59.177 16:15:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:59.177 16:15:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.177 16:15:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:59.177 16:15:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.177 16:15:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.177 16:15:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.177 16:15:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:59.177 16:15:44 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.177 16:15:44 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:59.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.177 --rc genhtml_branch_coverage=1 00:03:59.177 --rc genhtml_function_coverage=1 00:03:59.177 --rc genhtml_legend=1 00:03:59.177 --rc geninfo_all_blocks=1 00:03:59.177 --rc geninfo_unexecuted_blocks=1 00:03:59.177 00:03:59.177 ' 00:03:59.177 16:15:44 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:59.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.178 --rc genhtml_branch_coverage=1 00:03:59.178 --rc genhtml_function_coverage=1 00:03:59.178 --rc genhtml_legend=1 00:03:59.178 --rc geninfo_all_blocks=1 00:03:59.178 --rc geninfo_unexecuted_blocks=1 00:03:59.178 00:03:59.178 ' 00:03:59.178 16:15:44 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:03:59.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.178 --rc genhtml_branch_coverage=1 00:03:59.178 --rc genhtml_function_coverage=1 00:03:59.178 --rc genhtml_legend=1 00:03:59.178 --rc geninfo_all_blocks=1 00:03:59.178 --rc geninfo_unexecuted_blocks=1 00:03:59.178 00:03:59.178 ' 00:03:59.178 16:15:44 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:59.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.178 --rc genhtml_branch_coverage=1 00:03:59.178 --rc genhtml_function_coverage=1 00:03:59.178 --rc genhtml_legend=1 00:03:59.178 --rc geninfo_all_blocks=1 00:03:59.178 --rc geninfo_unexecuted_blocks=1 00:03:59.178 00:03:59.178 ' 00:03:59.178 16:15:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:59.178 16:15:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1968496 00:03:59.178 16:15:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1968496 00:03:59.178 16:15:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:59.178 16:15:44 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1968496 ']' 00:03:59.178 16:15:44 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.178 16:15:44 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.178 16:15:44 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.178 16:15:44 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.178 16:15:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.178 [2024-11-20 16:15:44.948872] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:03:59.178 [2024-11-20 16:15:44.948942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968496 ] 00:03:59.178 [2024-11-20 16:15:45.024638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.178 [2024-11-20 16:15:45.067667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:00.120 16:15:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:00.120 16:15:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1968496 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1968496 ']' 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1968496 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1968496 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1968496' 00:04:00.120 killing process with pid 1968496 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@973 -- # kill 1968496 00:04:00.120 16:15:45 alias_rpc -- common/autotest_common.sh@978 -- # wait 1968496 00:04:00.381 00:04:00.381 real 0m1.511s 00:04:00.381 user 0m1.665s 00:04:00.381 sys 0m0.409s 00:04:00.381 16:15:46 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.381 16:15:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.381 ************************************ 00:04:00.381 END TEST alias_rpc 00:04:00.381 ************************************ 00:04:00.381 16:15:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:00.381 16:15:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:00.381 16:15:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.381 16:15:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.381 16:15:46 -- common/autotest_common.sh@10 -- # set +x 00:04:00.381 ************************************ 00:04:00.381 START TEST spdkcli_tcp 00:04:00.381 ************************************ 00:04:00.381 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:00.642 * Looking for test storage... 
00:04:00.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:00.642 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:00.642 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:00.642 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:00.642 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:00.642 16:15:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.643 16:15:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:00.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.643 --rc genhtml_branch_coverage=1 00:04:00.643 --rc genhtml_function_coverage=1 00:04:00.643 --rc genhtml_legend=1 00:04:00.643 --rc geninfo_all_blocks=1 00:04:00.643 --rc geninfo_unexecuted_blocks=1 00:04:00.643 00:04:00.643 ' 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:00.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.643 --rc genhtml_branch_coverage=1 00:04:00.643 --rc genhtml_function_coverage=1 00:04:00.643 --rc genhtml_legend=1 00:04:00.643 --rc geninfo_all_blocks=1 00:04:00.643 --rc geninfo_unexecuted_blocks=1 00:04:00.643 00:04:00.643 ' 00:04:00.643 16:15:46 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:00.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.643 --rc genhtml_branch_coverage=1 00:04:00.643 --rc genhtml_function_coverage=1 00:04:00.643 --rc genhtml_legend=1 00:04:00.643 --rc geninfo_all_blocks=1 00:04:00.643 --rc geninfo_unexecuted_blocks=1 00:04:00.643 00:04:00.643 ' 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:00.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.643 --rc genhtml_branch_coverage=1 00:04:00.643 --rc genhtml_function_coverage=1 00:04:00.643 --rc genhtml_legend=1 00:04:00.643 --rc geninfo_all_blocks=1 00:04:00.643 --rc geninfo_unexecuted_blocks=1 00:04:00.643 00:04:00.643 ' 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1968898 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1968898 00:04:00.643 16:15:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1968898 ']' 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.643 16:15:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.643 [2024-11-20 16:15:46.523482] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:00.643 [2024-11-20 16:15:46.523535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968898 ] 00:04:00.643 [2024-11-20 16:15:46.595509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:00.905 [2024-11-20 16:15:46.632996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.905 [2024-11-20 16:15:46.633003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.475 16:15:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.475 16:15:47 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:01.475 16:15:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1969225 00:04:01.475 16:15:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:01.475 16:15:47 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:01.738 [ 00:04:01.738 "bdev_malloc_delete", 00:04:01.738 "bdev_malloc_create", 00:04:01.738 "bdev_null_resize", 00:04:01.738 "bdev_null_delete", 00:04:01.738 "bdev_null_create", 00:04:01.738 "bdev_nvme_cuse_unregister", 00:04:01.738 "bdev_nvme_cuse_register", 00:04:01.738 "bdev_opal_new_user", 00:04:01.738 "bdev_opal_set_lock_state", 00:04:01.738 "bdev_opal_delete", 00:04:01.738 "bdev_opal_get_info", 00:04:01.738 "bdev_opal_create", 00:04:01.738 "bdev_nvme_opal_revert", 00:04:01.738 "bdev_nvme_opal_init", 00:04:01.738 "bdev_nvme_send_cmd", 00:04:01.738 "bdev_nvme_set_keys", 00:04:01.738 "bdev_nvme_get_path_iostat", 00:04:01.738 "bdev_nvme_get_mdns_discovery_info", 00:04:01.738 "bdev_nvme_stop_mdns_discovery", 00:04:01.738 "bdev_nvme_start_mdns_discovery", 00:04:01.738 "bdev_nvme_set_multipath_policy", 00:04:01.738 "bdev_nvme_set_preferred_path", 00:04:01.738 "bdev_nvme_get_io_paths", 00:04:01.738 "bdev_nvme_remove_error_injection", 00:04:01.738 "bdev_nvme_add_error_injection", 00:04:01.738 "bdev_nvme_get_discovery_info", 00:04:01.738 "bdev_nvme_stop_discovery", 00:04:01.738 "bdev_nvme_start_discovery", 00:04:01.738 "bdev_nvme_get_controller_health_info", 00:04:01.738 "bdev_nvme_disable_controller", 00:04:01.738 "bdev_nvme_enable_controller", 00:04:01.738 "bdev_nvme_reset_controller", 00:04:01.738 "bdev_nvme_get_transport_statistics", 00:04:01.738 "bdev_nvme_apply_firmware", 00:04:01.738 "bdev_nvme_detach_controller", 00:04:01.738 "bdev_nvme_get_controllers", 00:04:01.738 "bdev_nvme_attach_controller", 00:04:01.738 "bdev_nvme_set_hotplug", 00:04:01.738 "bdev_nvme_set_options", 00:04:01.738 "bdev_passthru_delete", 00:04:01.738 "bdev_passthru_create", 00:04:01.738 "bdev_lvol_set_parent_bdev", 00:04:01.738 "bdev_lvol_set_parent", 00:04:01.738 "bdev_lvol_check_shallow_copy", 00:04:01.738 "bdev_lvol_start_shallow_copy", 00:04:01.738 "bdev_lvol_grow_lvstore", 00:04:01.738 
"bdev_lvol_get_lvols", 00:04:01.738 "bdev_lvol_get_lvstores", 00:04:01.738 "bdev_lvol_delete", 00:04:01.738 "bdev_lvol_set_read_only", 00:04:01.738 "bdev_lvol_resize", 00:04:01.738 "bdev_lvol_decouple_parent", 00:04:01.738 "bdev_lvol_inflate", 00:04:01.738 "bdev_lvol_rename", 00:04:01.738 "bdev_lvol_clone_bdev", 00:04:01.738 "bdev_lvol_clone", 00:04:01.738 "bdev_lvol_snapshot", 00:04:01.738 "bdev_lvol_create", 00:04:01.738 "bdev_lvol_delete_lvstore", 00:04:01.738 "bdev_lvol_rename_lvstore", 00:04:01.738 "bdev_lvol_create_lvstore", 00:04:01.738 "bdev_raid_set_options", 00:04:01.738 "bdev_raid_remove_base_bdev", 00:04:01.738 "bdev_raid_add_base_bdev", 00:04:01.738 "bdev_raid_delete", 00:04:01.738 "bdev_raid_create", 00:04:01.738 "bdev_raid_get_bdevs", 00:04:01.738 "bdev_error_inject_error", 00:04:01.738 "bdev_error_delete", 00:04:01.738 "bdev_error_create", 00:04:01.738 "bdev_split_delete", 00:04:01.738 "bdev_split_create", 00:04:01.738 "bdev_delay_delete", 00:04:01.738 "bdev_delay_create", 00:04:01.738 "bdev_delay_update_latency", 00:04:01.738 "bdev_zone_block_delete", 00:04:01.738 "bdev_zone_block_create", 00:04:01.738 "blobfs_create", 00:04:01.738 "blobfs_detect", 00:04:01.738 "blobfs_set_cache_size", 00:04:01.738 "bdev_aio_delete", 00:04:01.738 "bdev_aio_rescan", 00:04:01.738 "bdev_aio_create", 00:04:01.738 "bdev_ftl_set_property", 00:04:01.738 "bdev_ftl_get_properties", 00:04:01.738 "bdev_ftl_get_stats", 00:04:01.738 "bdev_ftl_unmap", 00:04:01.738 "bdev_ftl_unload", 00:04:01.738 "bdev_ftl_delete", 00:04:01.738 "bdev_ftl_load", 00:04:01.738 "bdev_ftl_create", 00:04:01.738 "bdev_virtio_attach_controller", 00:04:01.738 "bdev_virtio_scsi_get_devices", 00:04:01.738 "bdev_virtio_detach_controller", 00:04:01.738 "bdev_virtio_blk_set_hotplug", 00:04:01.738 "bdev_iscsi_delete", 00:04:01.738 "bdev_iscsi_create", 00:04:01.738 "bdev_iscsi_set_options", 00:04:01.738 "accel_error_inject_error", 00:04:01.738 "ioat_scan_accel_module", 00:04:01.738 "dsa_scan_accel_module", 
00:04:01.738 "iaa_scan_accel_module", 00:04:01.738 "vfu_virtio_create_fs_endpoint", 00:04:01.738 "vfu_virtio_create_scsi_endpoint", 00:04:01.738 "vfu_virtio_scsi_remove_target", 00:04:01.738 "vfu_virtio_scsi_add_target", 00:04:01.738 "vfu_virtio_create_blk_endpoint", 00:04:01.738 "vfu_virtio_delete_endpoint", 00:04:01.738 "keyring_file_remove_key", 00:04:01.738 "keyring_file_add_key", 00:04:01.738 "keyring_linux_set_options", 00:04:01.738 "fsdev_aio_delete", 00:04:01.738 "fsdev_aio_create", 00:04:01.738 "iscsi_get_histogram", 00:04:01.738 "iscsi_enable_histogram", 00:04:01.738 "iscsi_set_options", 00:04:01.738 "iscsi_get_auth_groups", 00:04:01.738 "iscsi_auth_group_remove_secret", 00:04:01.738 "iscsi_auth_group_add_secret", 00:04:01.738 "iscsi_delete_auth_group", 00:04:01.738 "iscsi_create_auth_group", 00:04:01.738 "iscsi_set_discovery_auth", 00:04:01.738 "iscsi_get_options", 00:04:01.738 "iscsi_target_node_request_logout", 00:04:01.738 "iscsi_target_node_set_redirect", 00:04:01.738 "iscsi_target_node_set_auth", 00:04:01.738 "iscsi_target_node_add_lun", 00:04:01.738 "iscsi_get_stats", 00:04:01.738 "iscsi_get_connections", 00:04:01.738 "iscsi_portal_group_set_auth", 00:04:01.738 "iscsi_start_portal_group", 00:04:01.738 "iscsi_delete_portal_group", 00:04:01.738 "iscsi_create_portal_group", 00:04:01.738 "iscsi_get_portal_groups", 00:04:01.738 "iscsi_delete_target_node", 00:04:01.738 "iscsi_target_node_remove_pg_ig_maps", 00:04:01.738 "iscsi_target_node_add_pg_ig_maps", 00:04:01.738 "iscsi_create_target_node", 00:04:01.738 "iscsi_get_target_nodes", 00:04:01.738 "iscsi_delete_initiator_group", 00:04:01.738 "iscsi_initiator_group_remove_initiators", 00:04:01.738 "iscsi_initiator_group_add_initiators", 00:04:01.738 "iscsi_create_initiator_group", 00:04:01.738 "iscsi_get_initiator_groups", 00:04:01.738 "nvmf_set_crdt", 00:04:01.738 "nvmf_set_config", 00:04:01.738 "nvmf_set_max_subsystems", 00:04:01.738 "nvmf_stop_mdns_prr", 00:04:01.738 "nvmf_publish_mdns_prr", 
00:04:01.738 "nvmf_subsystem_get_listeners", 00:04:01.738 "nvmf_subsystem_get_qpairs", 00:04:01.738 "nvmf_subsystem_get_controllers", 00:04:01.738 "nvmf_get_stats", 00:04:01.738 "nvmf_get_transports", 00:04:01.738 "nvmf_create_transport", 00:04:01.738 "nvmf_get_targets", 00:04:01.738 "nvmf_delete_target", 00:04:01.738 "nvmf_create_target", 00:04:01.738 "nvmf_subsystem_allow_any_host", 00:04:01.738 "nvmf_subsystem_set_keys", 00:04:01.738 "nvmf_subsystem_remove_host", 00:04:01.738 "nvmf_subsystem_add_host", 00:04:01.739 "nvmf_ns_remove_host", 00:04:01.739 "nvmf_ns_add_host", 00:04:01.739 "nvmf_subsystem_remove_ns", 00:04:01.739 "nvmf_subsystem_set_ns_ana_group", 00:04:01.739 "nvmf_subsystem_add_ns", 00:04:01.739 "nvmf_subsystem_listener_set_ana_state", 00:04:01.739 "nvmf_discovery_get_referrals", 00:04:01.739 "nvmf_discovery_remove_referral", 00:04:01.739 "nvmf_discovery_add_referral", 00:04:01.739 "nvmf_subsystem_remove_listener", 00:04:01.739 "nvmf_subsystem_add_listener", 00:04:01.739 "nvmf_delete_subsystem", 00:04:01.739 "nvmf_create_subsystem", 00:04:01.739 "nvmf_get_subsystems", 00:04:01.739 "env_dpdk_get_mem_stats", 00:04:01.739 "nbd_get_disks", 00:04:01.739 "nbd_stop_disk", 00:04:01.739 "nbd_start_disk", 00:04:01.739 "ublk_recover_disk", 00:04:01.739 "ublk_get_disks", 00:04:01.739 "ublk_stop_disk", 00:04:01.739 "ublk_start_disk", 00:04:01.739 "ublk_destroy_target", 00:04:01.739 "ublk_create_target", 00:04:01.739 "virtio_blk_create_transport", 00:04:01.739 "virtio_blk_get_transports", 00:04:01.739 "vhost_controller_set_coalescing", 00:04:01.739 "vhost_get_controllers", 00:04:01.739 "vhost_delete_controller", 00:04:01.739 "vhost_create_blk_controller", 00:04:01.739 "vhost_scsi_controller_remove_target", 00:04:01.739 "vhost_scsi_controller_add_target", 00:04:01.739 "vhost_start_scsi_controller", 00:04:01.739 "vhost_create_scsi_controller", 00:04:01.739 "thread_set_cpumask", 00:04:01.739 "scheduler_set_options", 00:04:01.739 "framework_get_governor", 00:04:01.739 
"framework_get_scheduler", 00:04:01.739 "framework_set_scheduler", 00:04:01.739 "framework_get_reactors", 00:04:01.739 "thread_get_io_channels", 00:04:01.739 "thread_get_pollers", 00:04:01.739 "thread_get_stats", 00:04:01.739 "framework_monitor_context_switch", 00:04:01.739 "spdk_kill_instance", 00:04:01.739 "log_enable_timestamps", 00:04:01.739 "log_get_flags", 00:04:01.739 "log_clear_flag", 00:04:01.739 "log_set_flag", 00:04:01.739 "log_get_level", 00:04:01.739 "log_set_level", 00:04:01.739 "log_get_print_level", 00:04:01.739 "log_set_print_level", 00:04:01.739 "framework_enable_cpumask_locks", 00:04:01.739 "framework_disable_cpumask_locks", 00:04:01.739 "framework_wait_init", 00:04:01.739 "framework_start_init", 00:04:01.739 "scsi_get_devices", 00:04:01.739 "bdev_get_histogram", 00:04:01.739 "bdev_enable_histogram", 00:04:01.739 "bdev_set_qos_limit", 00:04:01.739 "bdev_set_qd_sampling_period", 00:04:01.739 "bdev_get_bdevs", 00:04:01.739 "bdev_reset_iostat", 00:04:01.739 "bdev_get_iostat", 00:04:01.739 "bdev_examine", 00:04:01.739 "bdev_wait_for_examine", 00:04:01.739 "bdev_set_options", 00:04:01.739 "accel_get_stats", 00:04:01.739 "accel_set_options", 00:04:01.739 "accel_set_driver", 00:04:01.739 "accel_crypto_key_destroy", 00:04:01.739 "accel_crypto_keys_get", 00:04:01.739 "accel_crypto_key_create", 00:04:01.739 "accel_assign_opc", 00:04:01.739 "accel_get_module_info", 00:04:01.739 "accel_get_opc_assignments", 00:04:01.739 "vmd_rescan", 00:04:01.739 "vmd_remove_device", 00:04:01.739 "vmd_enable", 00:04:01.739 "sock_get_default_impl", 00:04:01.739 "sock_set_default_impl", 00:04:01.739 "sock_impl_set_options", 00:04:01.739 "sock_impl_get_options", 00:04:01.739 "iobuf_get_stats", 00:04:01.739 "iobuf_set_options", 00:04:01.739 "keyring_get_keys", 00:04:01.739 "vfu_tgt_set_base_path", 00:04:01.739 "framework_get_pci_devices", 00:04:01.739 "framework_get_config", 00:04:01.739 "framework_get_subsystems", 00:04:01.739 "fsdev_set_opts", 00:04:01.739 "fsdev_get_opts", 
00:04:01.739 "trace_get_info", 00:04:01.739 "trace_get_tpoint_group_mask", 00:04:01.739 "trace_disable_tpoint_group", 00:04:01.739 "trace_enable_tpoint_group", 00:04:01.739 "trace_clear_tpoint_mask", 00:04:01.739 "trace_set_tpoint_mask", 00:04:01.739 "notify_get_notifications", 00:04:01.739 "notify_get_types", 00:04:01.739 "spdk_get_version", 00:04:01.739 "rpc_get_methods" 00:04:01.739 ] 00:04:01.739 16:15:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.739 16:15:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:01.739 16:15:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1968898 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1968898 ']' 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1968898 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1968898 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1968898' 00:04:01.739 killing process with pid 1968898 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1968898 00:04:01.739 16:15:47 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1968898 00:04:02.000 00:04:02.000 real 0m1.545s 00:04:02.000 user 0m2.844s 00:04:02.000 sys 0m0.432s 00:04:02.000 16:15:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.000 16:15:47 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:02.000 ************************************ 00:04:02.000 END TEST spdkcli_tcp 00:04:02.000 ************************************ 00:04:02.000 16:15:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.000 16:15:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.000 16:15:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.000 16:15:47 -- common/autotest_common.sh@10 -- # set +x 00:04:02.000 ************************************ 00:04:02.000 START TEST dpdk_mem_utility 00:04:02.000 ************************************ 00:04:02.000 16:15:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.262 * Looking for test storage... 00:04:02.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:02.262 16:15:47 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.262 16:15:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.262 16:15:47 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.262 16:15:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:02.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.262 --rc genhtml_branch_coverage=1 00:04:02.262 --rc genhtml_function_coverage=1 00:04:02.262 --rc genhtml_legend=1 00:04:02.262 --rc geninfo_all_blocks=1 00:04:02.262 --rc geninfo_unexecuted_blocks=1 00:04:02.262 00:04:02.262 ' 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.262 --rc genhtml_branch_coverage=1 00:04:02.262 --rc genhtml_function_coverage=1 00:04:02.262 --rc genhtml_legend=1 00:04:02.262 --rc geninfo_all_blocks=1 00:04:02.262 --rc geninfo_unexecuted_blocks=1 00:04:02.262 00:04:02.262 ' 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.262 --rc genhtml_branch_coverage=1 00:04:02.262 --rc genhtml_function_coverage=1 00:04:02.262 --rc genhtml_legend=1 00:04:02.262 --rc geninfo_all_blocks=1 00:04:02.262 --rc geninfo_unexecuted_blocks=1 00:04:02.262 00:04:02.262 ' 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.262 --rc genhtml_branch_coverage=1 00:04:02.262 --rc genhtml_function_coverage=1 00:04:02.262 --rc genhtml_legend=1 00:04:02.262 --rc geninfo_all_blocks=1 00:04:02.262 --rc geninfo_unexecuted_blocks=1 00:04:02.262 00:04:02.262 ' 00:04:02.262 16:15:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:02.262 16:15:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1969314 00:04:02.262 16:15:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1969314 00:04:02.262 16:15:48 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1969314 ']' 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.262 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.263 [2024-11-20 16:15:48.135378] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:02.263 [2024-11-20 16:15:48.135446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969314 ] 00:04:02.263 [2024-11-20 16:15:48.211928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.524 [2024-11-20 16:15:48.252422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.098 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:03.098 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:03.098 16:15:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.098 16:15:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.098 16:15:48 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.098 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.098 { 00:04:03.098 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.098 } 00:04:03.098 16:15:48 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.098 16:15:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:03.098 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:03.098 1 heaps totaling size 818.000000 MiB 00:04:03.098 size: 818.000000 MiB heap id: 0 00:04:03.098 end heaps---------- 00:04:03.098 9 mempools totaling size 603.782043 MiB 00:04:03.098 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:03.098 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:03.098 size: 100.555481 MiB name: bdev_io_1969314 00:04:03.098 size: 50.003479 MiB name: msgpool_1969314 00:04:03.098 size: 36.509338 MiB name: fsdev_io_1969314 00:04:03.098 size: 21.763794 MiB name: PDU_Pool 00:04:03.098 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:03.098 size: 4.133484 MiB name: evtpool_1969314 00:04:03.098 size: 0.026123 MiB name: Session_Pool 00:04:03.098 end mempools------- 00:04:03.098 6 memzones totaling size 4.142822 MiB 00:04:03.098 size: 1.000366 MiB name: RG_ring_0_1969314 00:04:03.098 size: 1.000366 MiB name: RG_ring_1_1969314 00:04:03.098 size: 1.000366 MiB name: RG_ring_4_1969314 00:04:03.098 size: 1.000366 MiB name: RG_ring_5_1969314 00:04:03.098 size: 0.125366 MiB name: RG_ring_2_1969314 00:04:03.098 size: 0.015991 MiB name: RG_ring_3_1969314 00:04:03.098 end memzones------- 00:04:03.098 16:15:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.098 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:03.098 list of free elements. 
size: 10.852478 MiB 00:04:03.098 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:03.098 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:03.098 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:03.098 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:03.098 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:03.098 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:03.098 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:03.098 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:03.098 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:03.098 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:03.098 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:03.098 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:03.098 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:03.098 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:03.098 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:03.098 list of standard malloc elements. 
size: 199.218628 MiB 00:04:03.098 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:03.098 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:03.098 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:03.098 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:03.098 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:03.098 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:03.098 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:03.098 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:03.098 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:03.098 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:03.098 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:03.098 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:03.098 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:03.098 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:03.098 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:03.098 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:03.098 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:03.098 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:03.098 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:03.098 list of memzone associated elements. 
size: 607.928894 MiB 00:04:03.098 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:03.098 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:03.098 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:03.098 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:03.098 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:03.098 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1969314_0 00:04:03.098 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:03.098 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1969314_0 00:04:03.098 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:03.098 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1969314_0 00:04:03.098 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:03.098 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:03.098 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:03.098 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:03.098 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:03.098 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1969314_0 00:04:03.098 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:03.098 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1969314 00:04:03.098 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:03.098 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1969314 00:04:03.098 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:03.098 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:03.098 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:03.098 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:03.098 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:03.098 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:03.098 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:03.099 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:03.099 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:03.099 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1969314 00:04:03.099 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:03.099 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1969314 00:04:03.099 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:03.099 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1969314 00:04:03.099 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:03.099 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1969314 00:04:03.099 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:03.099 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1969314 00:04:03.099 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:03.099 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1969314 00:04:03.099 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:03.099 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:03.099 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:03.099 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:03.099 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:03.099 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:03.099 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:03.099 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1969314 00:04:03.099 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:03.099 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1969314 00:04:03.099 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:03.099 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:03.099 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:03.099 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:03.099 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:03.099 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1969314 00:04:03.099 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:03.099 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:03.099 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:03.099 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1969314 00:04:03.099 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:03.099 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1969314 00:04:03.099 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:03.099 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1969314 00:04:03.099 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:03.099 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:03.099 16:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:03.099 16:15:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1969314 00:04:03.099 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1969314 ']' 00:04:03.099 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1969314 00:04:03.099 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:03.099 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.099 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1969314 00:04:03.360 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.360 16:15:49 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.360 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1969314' 00:04:03.360 killing process with pid 1969314 00:04:03.360 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1969314 00:04:03.360 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1969314 00:04:03.360 00:04:03.360 real 0m1.405s 00:04:03.360 user 0m1.464s 00:04:03.360 sys 0m0.418s 00:04:03.360 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:03.360 16:15:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:03.360 ************************************ 00:04:03.360 END TEST dpdk_mem_utility 00:04:03.360 ************************************ 00:04:03.622 16:15:49 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.622 16:15:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:03.622 16:15:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.622 16:15:49 -- common/autotest_common.sh@10 -- # set +x 00:04:03.622 ************************************ 00:04:03.622 START TEST event 00:04:03.622 ************************************ 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:03.622 * Looking for test storage... 
00:04:03.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:03.622 16:15:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.622 16:15:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.622 16:15:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.622 16:15:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.622 16:15:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.622 16:15:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.622 16:15:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.622 16:15:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.622 16:15:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.622 16:15:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.622 16:15:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.622 16:15:49 event -- scripts/common.sh@344 -- # case "$op" in 00:04:03.622 16:15:49 event -- scripts/common.sh@345 -- # : 1 00:04:03.622 16:15:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.622 16:15:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.622 16:15:49 event -- scripts/common.sh@365 -- # decimal 1 00:04:03.622 16:15:49 event -- scripts/common.sh@353 -- # local d=1 00:04:03.622 16:15:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.622 16:15:49 event -- scripts/common.sh@355 -- # echo 1 00:04:03.622 16:15:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.622 16:15:49 event -- scripts/common.sh@366 -- # decimal 2 00:04:03.622 16:15:49 event -- scripts/common.sh@353 -- # local d=2 00:04:03.622 16:15:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.622 16:15:49 event -- scripts/common.sh@355 -- # echo 2 00:04:03.622 16:15:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.622 16:15:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.622 16:15:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.622 16:15:49 event -- scripts/common.sh@368 -- # return 0 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:03.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.622 --rc genhtml_branch_coverage=1 00:04:03.622 --rc genhtml_function_coverage=1 00:04:03.622 --rc genhtml_legend=1 00:04:03.622 --rc geninfo_all_blocks=1 00:04:03.622 --rc geninfo_unexecuted_blocks=1 00:04:03.622 00:04:03.622 ' 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:03.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.622 --rc genhtml_branch_coverage=1 00:04:03.622 --rc genhtml_function_coverage=1 00:04:03.622 --rc genhtml_legend=1 00:04:03.622 --rc geninfo_all_blocks=1 00:04:03.622 --rc geninfo_unexecuted_blocks=1 00:04:03.622 00:04:03.622 ' 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:03.622 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:03.622 --rc genhtml_branch_coverage=1 00:04:03.622 --rc genhtml_function_coverage=1 00:04:03.622 --rc genhtml_legend=1 00:04:03.622 --rc geninfo_all_blocks=1 00:04:03.622 --rc geninfo_unexecuted_blocks=1 00:04:03.622 00:04:03.622 ' 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:03.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.622 --rc genhtml_branch_coverage=1 00:04:03.622 --rc genhtml_function_coverage=1 00:04:03.622 --rc genhtml_legend=1 00:04:03.622 --rc geninfo_all_blocks=1 00:04:03.622 --rc geninfo_unexecuted_blocks=1 00:04:03.622 00:04:03.622 ' 00:04:03.622 16:15:49 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:03.622 16:15:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:03.622 16:15:49 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:03.622 16:15:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:03.622 16:15:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.884 ************************************ 00:04:03.884 START TEST event_perf 00:04:03.884 ************************************ 00:04:03.884 16:15:49 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:03.884 Running I/O for 1 seconds...[2024-11-20 16:15:49.612650] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:03.884 [2024-11-20 16:15:49.612743] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969709 ] 00:04:03.884 [2024-11-20 16:15:49.689528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:03.884 [2024-11-20 16:15:49.729047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.884 [2024-11-20 16:15:49.729285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:03.884 [2024-11-20 16:15:49.729441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.884 Running I/O for 1 seconds...[2024-11-20 16:15:49.729440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:04.827 00:04:04.827 lcore 0: 178588 00:04:04.827 lcore 1: 178585 00:04:04.827 lcore 2: 178587 00:04:04.827 lcore 3: 178590 00:04:04.827 done. 
00:04:04.827 00:04:04.827 real 0m1.175s 00:04:04.827 user 0m4.100s 00:04:04.827 sys 0m0.071s 00:04:04.828 16:15:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.828 16:15:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:04.828 ************************************ 00:04:04.828 END TEST event_perf 00:04:04.828 ************************************ 00:04:05.087 16:15:50 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.087 16:15:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:05.087 16:15:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.087 16:15:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.087 ************************************ 00:04:05.087 START TEST event_reactor 00:04:05.087 ************************************ 00:04:05.087 16:15:50 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:05.087 [2024-11-20 16:15:50.862969] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:05.087 [2024-11-20 16:15:50.863081] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970063 ] 00:04:05.087 [2024-11-20 16:15:50.940504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.087 [2024-11-20 16:15:50.978761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.471 test_start 00:04:06.471 oneshot 00:04:06.471 tick 100 00:04:06.471 tick 100 00:04:06.471 tick 250 00:04:06.471 tick 100 00:04:06.471 tick 100 00:04:06.471 tick 100 00:04:06.471 tick 250 00:04:06.471 tick 500 00:04:06.471 tick 100 00:04:06.471 tick 100 00:04:06.471 tick 250 00:04:06.471 tick 100 00:04:06.471 tick 100 00:04:06.471 test_end 00:04:06.471 00:04:06.471 real 0m1.170s 00:04:06.471 user 0m1.099s 00:04:06.471 sys 0m0.065s 00:04:06.471 16:15:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.471 16:15:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:06.471 ************************************ 00:04:06.471 END TEST event_reactor 00:04:06.471 ************************************ 00:04:06.471 16:15:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:06.471 16:15:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:06.471 16:15:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.471 16:15:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.471 ************************************ 00:04:06.471 START TEST event_reactor_perf 00:04:06.471 ************************************ 00:04:06.471 16:15:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:06.471 [2024-11-20 16:15:52.113697] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:06.471 [2024-11-20 16:15:52.113795] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970267 ] 00:04:06.471 [2024-11-20 16:15:52.191949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.471 [2024-11-20 16:15:52.226797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.412 test_start 00:04:07.412 test_end 00:04:07.412 Performance: 369498 events per second 00:04:07.412 00:04:07.412 real 0m1.167s 00:04:07.412 user 0m1.094s 00:04:07.412 sys 0m0.069s 00:04:07.412 16:15:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.412 16:15:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.412 ************************************ 00:04:07.412 END TEST event_reactor_perf 00:04:07.412 ************************************ 00:04:07.412 16:15:53 event -- event/event.sh@49 -- # uname -s 00:04:07.412 16:15:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:07.412 16:15:53 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.412 16:15:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.412 16:15:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.412 16:15:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.412 ************************************ 00:04:07.412 START TEST event_scheduler 00:04:07.412 ************************************ 00:04:07.412 16:15:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:07.673 * Looking for test storage... 00:04:07.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.673 16:15:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.673 --rc genhtml_branch_coverage=1 00:04:07.673 --rc genhtml_function_coverage=1 00:04:07.673 --rc genhtml_legend=1 00:04:07.673 --rc geninfo_all_blocks=1 00:04:07.673 --rc geninfo_unexecuted_blocks=1 00:04:07.673 00:04:07.673 ' 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.673 --rc genhtml_branch_coverage=1 00:04:07.673 --rc genhtml_function_coverage=1 00:04:07.673 --rc 
genhtml_legend=1 00:04:07.673 --rc geninfo_all_blocks=1 00:04:07.673 --rc geninfo_unexecuted_blocks=1 00:04:07.673 00:04:07.673 ' 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.673 --rc genhtml_branch_coverage=1 00:04:07.673 --rc genhtml_function_coverage=1 00:04:07.673 --rc genhtml_legend=1 00:04:07.673 --rc geninfo_all_blocks=1 00:04:07.673 --rc geninfo_unexecuted_blocks=1 00:04:07.673 00:04:07.673 ' 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.673 --rc genhtml_branch_coverage=1 00:04:07.673 --rc genhtml_function_coverage=1 00:04:07.673 --rc genhtml_legend=1 00:04:07.673 --rc geninfo_all_blocks=1 00:04:07.673 --rc geninfo_unexecuted_blocks=1 00:04:07.673 00:04:07.673 ' 00:04:07.673 16:15:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:07.673 16:15:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1970535 00:04:07.673 16:15:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.673 16:15:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1970535 00:04:07.673 16:15:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1970535 ']' 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.673 16:15:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.673 [2024-11-20 16:15:53.606695] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:07.673 [2024-11-20 16:15:53.606773] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970535 ] 00:04:07.935 [2024-11-20 16:15:53.669628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:07.935 [2024-11-20 16:15:53.709258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.935 [2024-11-20 16:15:53.709416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.935 [2024-11-20 16:15:53.709572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:07.935 [2024-11-20 16:15:53.709573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:07.935 16:15:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 [2024-11-20 16:15:53.733977] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:07.935 [2024-11-20 16:15:53.733994] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:07.935 [2024-11-20 16:15:53.734002] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:07.935 [2024-11-20 16:15:53.734006] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:07.935 [2024-11-20 16:15:53.734010] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.935 16:15:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 [2024-11-20 16:15:53.797109] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.935 16:15:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.935 16:15:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 ************************************ 00:04:07.935 START TEST scheduler_create_thread 00:04:07.935 ************************************ 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 2 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 3 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 4 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 5 00:04:07.935 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.935 16:15:53 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:08.195 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.195 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.195 6 00:04:08.195 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.195 16:15:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:08.195 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.195 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.195 7 00:04:08.195 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.196 8 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.196 16:15:53 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.196 9 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.196 16:15:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.456 10 00:04:08.456 16:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.456 16:15:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:08.456 16:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.456 16:15:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.842 16:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:09.842 16:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:09.842 16:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:09.842 16:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:09.842 16:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.783 16:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.783 16:15:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:10.783 16:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.783 16:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:11.355 16:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.355 16:15:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:11.355 16:15:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:11.355 16:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.355 16:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.297 16:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:12.297 00:04:12.297 real 0m4.225s 00:04:12.297 user 0m0.022s 00:04:12.297 sys 0m0.010s 00:04:12.297 16:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.298 16:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.298 ************************************ 00:04:12.298 END TEST scheduler_create_thread 00:04:12.298 ************************************ 00:04:12.298 16:15:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:12.298 16:15:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1970535 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1970535 ']' 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1970535 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970535 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970535' 00:04:12.298 killing process with pid 1970535 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1970535 00:04:12.298 16:15:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1970535 00:04:12.558 [2024-11-20 16:15:58.342348] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:12.558 00:04:12.558 real 0m5.156s 00:04:12.558 user 0m10.137s 00:04:12.558 sys 0m0.386s 00:04:12.558 16:15:58 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.558 16:15:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:12.558 ************************************ 00:04:12.558 END TEST event_scheduler 00:04:12.558 ************************************ 00:04:12.819 16:15:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:12.819 16:15:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:12.819 16:15:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.819 16:15:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.819 16:15:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.819 ************************************ 00:04:12.819 START TEST app_repeat 00:04:12.819 ************************************ 00:04:12.819 16:15:58 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1971583 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1971583' 00:04:12.819 Process app_repeat pid: 1971583 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:12.819 spdk_app_start Round 0 00:04:12.819 16:15:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1971583 /var/tmp/spdk-nbd.sock 00:04:12.819 16:15:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1971583 ']' 00:04:12.819 16:15:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:12.819 16:15:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:12.819 16:15:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:12.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:12.819 16:15:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:12.819 16:15:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:12.819 [2024-11-20 16:15:58.624216] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:12.819 [2024-11-20 16:15:58.624280] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971583 ] 00:04:12.819 [2024-11-20 16:15:58.697436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:12.819 [2024-11-20 16:15:58.735948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.819 [2024-11-20 16:15:58.735952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.080 16:15:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:13.080 16:15:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:13.080 16:15:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.080 Malloc0 00:04:13.080 16:15:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.339 Malloc1 00:04:13.339 16:15:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:13.339 
16:15:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.339 16:15:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:13.600 /dev/nbd0 00:04:13.600 16:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:13.600 16:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:13.600 1+0 records in 00:04:13.600 1+0 records out 00:04:13.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210408 s, 19.5 MB/s 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:13.600 16:15:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:13.600 16:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.600 16:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.601 16:15:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:13.861 /dev/nbd1 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:13.861 16:15:59 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.861 1+0 records in 00:04:13.861 1+0 records out 00:04:13.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276537 s, 14.8 MB/s 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:13.861 16:15:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:13.861 { 00:04:13.861 "nbd_device": "/dev/nbd0", 00:04:13.861 "bdev_name": "Malloc0" 00:04:13.861 }, 00:04:13.861 { 00:04:13.861 "nbd_device": "/dev/nbd1", 00:04:13.861 "bdev_name": "Malloc1" 00:04:13.861 } 00:04:13.861 ]' 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:13.861 { 00:04:13.861 "nbd_device": "/dev/nbd0", 00:04:13.861 "bdev_name": "Malloc0" 00:04:13.861 
}, 00:04:13.861 { 00:04:13.861 "nbd_device": "/dev/nbd1", 00:04:13.861 "bdev_name": "Malloc1" 00:04:13.861 } 00:04:13.861 ]' 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:13.861 /dev/nbd1' 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:13.861 /dev/nbd1' 00:04:13.861 16:15:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:14.122 256+0 records in 00:04:14.122 256+0 records out 00:04:14.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534393 s, 196 MB/s 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:14.122 256+0 records in 00:04:14.122 256+0 records out 00:04:14.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167602 s, 62.6 MB/s 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:14.122 256+0 records in 00:04:14.122 256+0 records out 00:04:14.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201488 s, 52.0 MB/s 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:14.122 16:15:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:14.123 16:15:59 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.123 16:15:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:14.123 16:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:14.383 16:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:14.383 16:16:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:14.384 16:16:00 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.384 16:16:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:14.646 16:16:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:14.646 16:16:00 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:14.907 16:16:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:14.907 [2024-11-20 16:16:00.788279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.907 [2024-11-20 16:16:00.824463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.907 [2024-11-20 16:16:00.824464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.907 [2024-11-20 16:16:00.856609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:14.907 [2024-11-20 16:16:00.856645] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:18.207 16:16:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:18.207 16:16:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:18.207 spdk_app_start Round 1 00:04:18.207 16:16:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1971583 /var/tmp/spdk-nbd.sock 00:04:18.207 16:16:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1971583 ']' 00:04:18.207 16:16:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.207 16:16:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.207 16:16:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:18.207 16:16:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.207 16:16:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.207 16:16:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.207 16:16:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:18.207 16:16:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.207 Malloc0 00:04:18.207 16:16:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.468 Malloc1 00:04:18.468 16:16:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.468 16:16:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.468 16:16:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.468 16:16:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.468 16:16:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.468 16:16:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.468 16:16:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:18.469 /dev/nbd0 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.469 1+0 records in 00:04:18.469 1+0 records out 00:04:18.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280778 s, 14.6 MB/s 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.469 16:16:04 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.469 16:16:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.469 16:16:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:18.730 /dev/nbd1 00:04:18.730 16:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:18.730 16:16:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:18.730 1+0 records in 00:04:18.730 1+0 records out 00:04:18.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209778 s, 19.5 MB/s 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:18.730 16:16:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:18.730 16:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:18.730 16:16:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.730 16:16:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.730 16:16:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.730 16:16:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:18.992 { 00:04:18.992 "nbd_device": "/dev/nbd0", 00:04:18.992 "bdev_name": "Malloc0" 00:04:18.992 }, 00:04:18.992 { 00:04:18.992 "nbd_device": "/dev/nbd1", 00:04:18.992 "bdev_name": "Malloc1" 00:04:18.992 } 00:04:18.992 ]' 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:18.992 { 00:04:18.992 "nbd_device": "/dev/nbd0", 00:04:18.992 "bdev_name": "Malloc0" 00:04:18.992 }, 00:04:18.992 { 00:04:18.992 "nbd_device": "/dev/nbd1", 00:04:18.992 "bdev_name": "Malloc1" 00:04:18.992 } 00:04:18.992 ]' 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:18.992 /dev/nbd1' 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:18.992 /dev/nbd1' 00:04:18.992 
16:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:18.992 256+0 records in 00:04:18.992 256+0 records out 00:04:18.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120206 s, 87.2 MB/s 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:18.992 256+0 records in 00:04:18.992 256+0 records out 00:04:18.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168169 s, 62.4 MB/s 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:18.992 256+0 records in 00:04:18.992 256+0 records out 00:04:18.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194317 s, 54.0 MB/s 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:18.992 16:16:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.993 16:16:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:19.253 16:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.254 16:16:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:19.514 16:16:05 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.514 16:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:19.775 16:16:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:19.775 16:16:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:19.775 16:16:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:20.035 [2024-11-20 16:16:05.834698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:20.035 [2024-11-20 16:16:05.870573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.035 [2024-11-20 16:16:05.870575] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.035 [2024-11-20 16:16:05.903283] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:20.035 [2024-11-20 16:16:05.903317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:23.340 16:16:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:23.340 16:16:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:23.340 spdk_app_start Round 2 00:04:23.340 16:16:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1971583 /var/tmp/spdk-nbd.sock 00:04:23.340 16:16:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1971583 ']' 00:04:23.340 16:16:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:23.340 16:16:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.340 16:16:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:23.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
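The "Waiting for process to start up and listen on UNIX domain socket..." message above comes from a bounded retry helper in the harness (the trace shows `local max_retries=100`). As an illustration only — the function name and arguments below are hypothetical, not SPDK's actual `autotest_common.sh` code — the pattern is a polling loop that gives up after a fixed number of probes:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a "waitforlisten"-style helper: poll until a path
# (here, a UNIX socket or file) exists, giving up after max_retries attempts.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 1; i <= max_retries; i++)); do
        if [ -e "$path" ]; then
            return 0        # the socket/file showed up
        fi
        sleep 0.1           # back off briefly before the next probe
    done
    return 1                # timed out; caller decides how to fail
}
```

The harness's real helper does more (the trace shows it also storing the RPC address and retry budget in locals before looping), but the bounded-poll shape is the same.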
00:04:23.340 16:16:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.340 16:16:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:23.340 16:16:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.340 16:16:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:23.340 16:16:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.340 Malloc0 00:04:23.340 16:16:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:23.340 Malloc1 00:04:23.340 16:16:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.340 16:16:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.340 16:16:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.340 16:16:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:23.340 16:16:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.340 16:16:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:23.340 16:16:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:23.340 16:16:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.340 16:16:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:23.341 16:16:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:23.341 16:16:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.341 16:16:09 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:23.341 16:16:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:23.341 16:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:23.341 16:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.341 16:16:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.601 /dev/nbd0 00:04:23.601 16:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.601 16:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.601 1+0 records in 00:04:23.601 1+0 records out 00:04:23.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152978 s, 26.8 MB/s 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.601 16:16:09 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.601 16:16:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.601 16:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.601 16:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.601 16:16:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.861 /dev/nbd1 00:04:23.862 16:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.862 16:16:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.862 1+0 records in 00:04:23.862 1+0 records out 00:04:23.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235162 s, 17.4 MB/s 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:23.862 16:16:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:23.862 16:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.862 16:16:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.862 16:16:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.862 16:16:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.862 16:16:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:24.123 { 00:04:24.123 "nbd_device": "/dev/nbd0", 00:04:24.123 "bdev_name": "Malloc0" 00:04:24.123 }, 00:04:24.123 { 00:04:24.123 "nbd_device": "/dev/nbd1", 00:04:24.123 "bdev_name": "Malloc1" 00:04:24.123 } 00:04:24.123 ]' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:24.123 { 00:04:24.123 "nbd_device": "/dev/nbd0", 00:04:24.123 "bdev_name": "Malloc0" 00:04:24.123 }, 00:04:24.123 { 00:04:24.123 "nbd_device": "/dev/nbd1", 00:04:24.123 "bdev_name": "Malloc1" 00:04:24.123 } 00:04:24.123 ]' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:24.123 /dev/nbd1' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:24.123 /dev/nbd1' 00:04:24.123 
16:16:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:24.123 256+0 records in 00:04:24.123 256+0 records out 00:04:24.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127307 s, 82.4 MB/s 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:24.123 256+0 records in 00:04:24.123 256+0 records out 00:04:24.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168117 s, 62.4 MB/s 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:24.123 256+0 records in 00:04:24.123 256+0 records out 00:04:24.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185019 s, 56.7 MB/s 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.123 16:16:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:24.383 16:16:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.644 16:16:10 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.644 16:16:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.644 16:16:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.904 16:16:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:25.164 [2024-11-20 16:16:10.868823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.164 [2024-11-20 16:16:10.904488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.164 [2024-11-20 16:16:10.904491] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.164 [2024-11-20 16:16:10.936607] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.164 [2024-11-20 16:16:10.936647] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:28.461 16:16:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1971583 /var/tmp/spdk-nbd.sock 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1971583 ']' 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:28.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
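The `nbd_get_count` traces above show how the harness counts attached devices: `nbd_get_disks` returns a JSON array, `jq -r` pulls out each `nbd_device`, and `grep -c` counts the `/dev/nbd` lines (with a trailing `true` so an empty list yields a count of 0 instead of a failing `grep`, as in the `# true` step of the trace). A standalone sketch, with the RPC output replaced by a canned JSON string since no SPDK target is assumed to be running:

```shell
#!/usr/bin/env bash
# Stand-in for real `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks` output.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Extract one device path per line, then count the lines naming /dev/nbd*.
# grep -c exits nonzero on zero matches, so `|| true` keeps the pipeline alive.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"    # 2 for the two attached devices
```

The same pipeline run against `'[]'` (the post-teardown trace above) produces a count of 0, which is what the `'[' 0 -ne 0 ']'` check then validates.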
00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:28.461 16:16:13 event.app_repeat -- event/event.sh@39 -- # killprocess 1971583 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1971583 ']' 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1971583 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.461 16:16:13 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1971583 00:04:28.462 16:16:13 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.462 16:16:13 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.462 16:16:13 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1971583' 00:04:28.462 killing process with pid 1971583 00:04:28.462 16:16:13 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1971583 00:04:28.462 16:16:13 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1971583 00:04:28.462 spdk_app_start is called in Round 0. 00:04:28.462 Shutdown signal received, stop current app iteration 00:04:28.462 Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 reinitialization... 00:04:28.462 spdk_app_start is called in Round 1. 00:04:28.462 Shutdown signal received, stop current app iteration 00:04:28.462 Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 reinitialization... 00:04:28.462 spdk_app_start is called in Round 2. 
00:04:28.462 Shutdown signal received, stop current app iteration 00:04:28.462 Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 reinitialization... 00:04:28.462 spdk_app_start is called in Round 3. 00:04:28.462 Shutdown signal received, stop current app iteration 00:04:28.462 16:16:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:28.462 16:16:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:28.462 00:04:28.462 real 0m15.504s 00:04:28.462 user 0m33.854s 00:04:28.462 sys 0m2.170s 00:04:28.462 16:16:14 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.462 16:16:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.462 ************************************ 00:04:28.462 END TEST app_repeat 00:04:28.462 ************************************ 00:04:28.462 16:16:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:28.462 16:16:14 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.462 16:16:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.462 16:16:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.462 16:16:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.462 ************************************ 00:04:28.462 START TEST cpu_locks 00:04:28.462 ************************************ 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.462 * Looking for test storage... 
00:04:28.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.462 16:16:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:28.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.462 --rc genhtml_branch_coverage=1 00:04:28.462 --rc genhtml_function_coverage=1 00:04:28.462 --rc genhtml_legend=1 00:04:28.462 --rc geninfo_all_blocks=1 00:04:28.462 --rc geninfo_unexecuted_blocks=1 00:04:28.462 00:04:28.462 ' 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:28.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.462 --rc genhtml_branch_coverage=1 00:04:28.462 --rc genhtml_function_coverage=1 00:04:28.462 --rc genhtml_legend=1 00:04:28.462 --rc geninfo_all_blocks=1 00:04:28.462 --rc geninfo_unexecuted_blocks=1 
00:04:28.462 00:04:28.462 ' 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:28.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.462 --rc genhtml_branch_coverage=1 00:04:28.462 --rc genhtml_function_coverage=1 00:04:28.462 --rc genhtml_legend=1 00:04:28.462 --rc geninfo_all_blocks=1 00:04:28.462 --rc geninfo_unexecuted_blocks=1 00:04:28.462 00:04:28.462 ' 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:28.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.462 --rc genhtml_branch_coverage=1 00:04:28.462 --rc genhtml_function_coverage=1 00:04:28.462 --rc genhtml_legend=1 00:04:28.462 --rc geninfo_all_blocks=1 00:04:28.462 --rc geninfo_unexecuted_blocks=1 00:04:28.462 00:04:28.462 ' 00:04:28.462 16:16:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:28.462 16:16:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:28.462 16:16:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:28.462 16:16:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.462 16:16:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.462 ************************************ 00:04:28.462 START TEST default_locks 00:04:28.462 ************************************ 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1975110 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1975110 00:04:28.462 16:16:14 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1975110 ']' 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.462 16:16:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.723 [2024-11-20 16:16:14.466160] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:28.723 [2024-11-20 16:16:14.466224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975110 ] 00:04:28.723 [2024-11-20 16:16:14.540734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.723 [2024-11-20 16:16:14.582541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1975110 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1975110 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.666 lslocks: write error 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1975110 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1975110 ']' 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1975110 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975110 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.666 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.667 16:16:15 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1975110' 00:04:29.667 killing process with pid 1975110 00:04:29.667 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1975110 00:04:29.667 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1975110 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1975110 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1975110 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1975110 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1975110 ']' 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1975110) - No such process 00:04:29.928 ERROR: process (pid: 1975110) is no longer running 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.928 00:04:29.928 real 0m1.288s 00:04:29.928 user 0m1.396s 00:04:29.928 sys 0m0.411s 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.928 16:16:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.928 ************************************ 00:04:29.928 END TEST default_locks 00:04:29.928 ************************************ 00:04:29.928 16:16:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.928 16:16:15 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.928 16:16:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.928 16:16:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.928 ************************************ 00:04:29.928 START TEST default_locks_via_rpc 00:04:29.928 ************************************ 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1975309 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1975309 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975309 ']' 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.928 16:16:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.928 [2024-11-20 16:16:15.820569] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:29.928 [2024-11-20 16:16:15.820618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975309 ] 00:04:30.188 [2024-11-20 16:16:15.891050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.188 [2024-11-20 16:16:15.928388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.757 16:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.757 16:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.757 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:30.757 16:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.758 16:16:16 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1975309 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1975309 00:04:30.758 16:16:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1975309 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1975309 ']' 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1975309 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975309 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975309' 00:04:31.327 killing process with pid 1975309 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1975309 00:04:31.327 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1975309 00:04:31.589 00:04:31.589 real 0m1.555s 00:04:31.589 user 0m1.686s 00:04:31.589 sys 0m0.521s 00:04:31.589 16:16:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.589 16:16:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.589 ************************************ 00:04:31.589 END TEST default_locks_via_rpc 00:04:31.589 ************************************ 00:04:31.589 16:16:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:31.589 16:16:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.589 16:16:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.589 16:16:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.589 ************************************ 00:04:31.589 START TEST non_locking_app_on_locked_coremask 00:04:31.589 ************************************ 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1975637 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1975637 /var/tmp/spdk.sock 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1975637 ']' 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:31.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.589 16:16:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.589 [2024-11-20 16:16:17.456193] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:31.589 [2024-11-20 16:16:17.456248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975637 ] 00:04:31.589 [2024-11-20 16:16:17.530172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.849 [2024-11-20 16:16:17.572111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1975876 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1975876 /var/tmp/spdk2.sock 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1975876 ']' 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.420 16:16:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.420 [2024-11-20 16:16:18.275605] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:32.420 [2024-11-20 16:16:18.275655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975876 ] 00:04:32.680 [2024-11-20 16:16:18.386251] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:32.680 [2024-11-20 16:16:18.386280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.680 [2024-11-20 16:16:18.458579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.249 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.249 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:33.249 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1975637 00:04:33.249 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1975637 00:04:33.249 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.821 lslocks: write error 00:04:33.821 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1975637 00:04:33.821 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1975637 ']' 00:04:33.821 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1975637 00:04:33.821 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:33.821 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.821 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975637 00:04:34.081 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.081 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.081 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1975637' 00:04:34.081 killing process with pid 1975637 00:04:34.081 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1975637 00:04:34.081 16:16:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1975637 00:04:34.342 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1975876 00:04:34.342 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1975876 ']' 00:04:34.342 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1975876 00:04:34.342 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:34.342 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.342 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975876 00:04:34.603 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.603 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.603 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975876' 00:04:34.603 killing process with pid 1975876 00:04:34.603 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1975876 00:04:34.603 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1975876 00:04:34.603 00:04:34.603 real 0m3.115s 00:04:34.603 user 0m3.417s 00:04:34.603 sys 0m0.941s 00:04:34.603 16:16:20 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.603 16:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.603 ************************************ 00:04:34.603 END TEST non_locking_app_on_locked_coremask 00:04:34.603 ************************************ 00:04:34.603 16:16:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:34.603 16:16:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.603 16:16:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.603 16:16:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.863 ************************************ 00:04:34.863 START TEST locking_app_on_unlocked_coremask 00:04:34.863 ************************************ 00:04:34.863 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:34.863 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1976264 00:04:34.863 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1976264 /var/tmp/spdk.sock 00:04:34.863 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:34.863 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1976264 ']' 00:04:34.863 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.863 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.863 16:16:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.863 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.864 16:16:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:34.864 [2024-11-20 16:16:20.645827] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:34.864 [2024-11-20 16:16:20.645885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976264 ] 00:04:34.864 [2024-11-20 16:16:20.722204] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:34.864 [2024-11-20 16:16:20.722240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.864 [2024-11-20 16:16:20.762067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1976582 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1976582 /var/tmp/spdk2.sock 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1976582 ']' 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.805 16:16:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.805 [2024-11-20 16:16:21.485538] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:35.805 [2024-11-20 16:16:21.485590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976582 ] 00:04:35.805 [2024-11-20 16:16:21.598592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.805 [2024-11-20 16:16:21.670705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.375 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.375 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:36.375 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1976582 00:04:36.375 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1976582 00:04:36.375 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:36.946 lslocks: write error 00:04:36.946 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1976264 00:04:36.946 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1976264 ']' 00:04:36.946 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1976264 00:04:36.946 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:36.946 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.946 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976264 00:04:37.207 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.207 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.207 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976264' 00:04:37.207 killing process with pid 1976264 00:04:37.207 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1976264 00:04:37.207 16:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1976264 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1976582 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1976582 ']' 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1976582 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976582 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976582' 00:04:37.467 killing process with pid 1976582 00:04:37.467 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1976582 00:04:37.467 16:16:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1976582 00:04:37.727 00:04:37.727 real 0m3.028s 00:04:37.727 user 0m3.355s 00:04:37.727 sys 0m0.910s 00:04:37.727 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.727 16:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.727 ************************************ 00:04:37.727 END TEST locking_app_on_unlocked_coremask 00:04:37.727 ************************************ 00:04:37.727 16:16:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:37.727 16:16:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.728 16:16:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.728 16:16:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:37.988 ************************************ 00:04:37.988 START TEST locking_app_on_locked_coremask 00:04:37.988 ************************************ 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1976961 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1976961 /var/tmp/spdk.sock 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1976961 ']' 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.988 16:16:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:37.988 [2024-11-20 16:16:23.756361] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:37.988 [2024-11-20 16:16:23.756412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976961 ] 00:04:37.988 [2024-11-20 16:16:23.827518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.988 [2024-11-20 16:16:23.866099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1977232 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1977232 /var/tmp/spdk2.sock 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1977232 /var/tmp/spdk2.sock 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1977232 /var/tmp/spdk2.sock 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1977232 ']' 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:38.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.578 16:16:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.839 [2024-11-20 16:16:24.591733] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:38.839 [2024-11-20 16:16:24.591788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977232 ] 00:04:38.839 [2024-11-20 16:16:24.704108] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1976961 has claimed it. 00:04:38.839 [2024-11-20 16:16:24.704148] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:39.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1977232) - No such process 00:04:39.411 ERROR: process (pid: 1977232) is no longer running 00:04:39.411 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.411 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:39.411 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:39.411 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.411 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.411 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.411 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1976961 00:04:39.411 16:16:25 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1976961 00:04:39.411 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.672 lslocks: write error 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1976961 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1976961 ']' 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1976961 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976961 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976961' 00:04:39.672 killing process with pid 1976961 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1976961 00:04:39.672 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1976961 00:04:39.933 00:04:39.934 real 0m1.983s 00:04:39.934 user 0m2.256s 00:04:39.934 sys 0m0.517s 00:04:39.934 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.934 16:16:25 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:39.934 ************************************ 00:04:39.934 END TEST locking_app_on_locked_coremask 00:04:39.934 ************************************ 00:04:39.934 16:16:25 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:39.934 16:16:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.934 16:16:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.934 16:16:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.934 ************************************ 00:04:39.934 START TEST locking_overlapped_coremask 00:04:39.934 ************************************ 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1977361 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1977361 /var/tmp/spdk.sock 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1977361 ']' 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.934 16:16:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.934 [2024-11-20 16:16:25.806385] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:39.934 [2024-11-20 16:16:25.806439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977361 ] 00:04:39.934 [2024-11-20 16:16:25.879975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:40.194 [2024-11-20 16:16:25.921567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.194 [2024-11-20 16:16:25.921683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.194 [2024-11-20 16:16:25.921687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1977673 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1977673 /var/tmp/spdk2.sock 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1977673 /var/tmp/spdk2.sock 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1977673 /var/tmp/spdk2.sock 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1977673 ']' 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.766 16:16:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.766 [2024-11-20 16:16:26.666455] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:40.766 [2024-11-20 16:16:26.666510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977673 ] 00:04:41.026 [2024-11-20 16:16:26.754332] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1977361 has claimed it. 00:04:41.027 [2024-11-20 16:16:26.754363] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:41.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1977673) - No such process 00:04:41.598 ERROR: process (pid: 1977673) is no longer running 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1977361 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1977361 ']' 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1977361 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977361 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977361' 00:04:41.598 killing process with pid 1977361 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1977361 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1977361 00:04:41.598 00:04:41.598 real 0m1.802s 00:04:41.598 user 0m5.207s 00:04:41.598 sys 0m0.395s 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.598 16:16:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.598 
************************************ 00:04:41.598 END TEST locking_overlapped_coremask 00:04:41.598 ************************************ 00:04:41.859 16:16:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:41.859 16:16:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.859 16:16:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.859 16:16:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.859 ************************************ 00:04:41.859 START TEST locking_overlapped_coremask_via_rpc 00:04:41.859 ************************************ 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1977824 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1977824 /var/tmp/spdk.sock 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1977824 ']' 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:41.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.859 16:16:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.859 [2024-11-20 16:16:27.694515] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:41.859 [2024-11-20 16:16:27.694573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977824 ] 00:04:41.859 [2024-11-20 16:16:27.770348] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:41.859 [2024-11-20 16:16:27.770383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:42.120 [2024-11-20 16:16:27.815948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.120 [2024-11-20 16:16:27.816099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.120 [2024-11-20 16:16:27.816195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.694 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.694 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:42.694 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1978048 00:04:42.694 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1978048 /var/tmp/spdk2.sock 00:04:42.694 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1978048 ']' 00:04:42.694 16:16:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:42.694 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.695 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.695 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:42.695 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.695 16:16:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.695 [2024-11-20 16:16:28.544867] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:42.695 [2024-11-20 16:16:28.544920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978048 ] 00:04:42.695 [2024-11-20 16:16:28.633227] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:42.695 [2024-11-20 16:16:28.633248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:42.956 [2024-11-20 16:16:28.696346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.956 [2024-11-20 16:16:28.696504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.956 [2024-11-20 16:16:28.696506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:43.526 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.527 16:16:29 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.527 [2024-11-20 16:16:29.348047] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1977824 has claimed it. 00:04:43.527 request: 00:04:43.527 { 00:04:43.527 "method": "framework_enable_cpumask_locks", 00:04:43.527 "req_id": 1 00:04:43.527 } 00:04:43.527 Got JSON-RPC error response 00:04:43.527 response: 00:04:43.527 { 00:04:43.527 "code": -32603, 00:04:43.527 "message": "Failed to claim CPU core: 2" 00:04:43.527 } 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1977824 /var/tmp/spdk.sock 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1977824 ']' 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.527 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1978048 /var/tmp/spdk2.sock 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1978048 ']' 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:43.788 00:04:43.788 real 0m2.100s 00:04:43.788 user 0m0.874s 00:04:43.788 sys 0m0.149s 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.788 16:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.788 ************************************ 00:04:43.788 END TEST locking_overlapped_coremask_via_rpc 00:04:43.788 ************************************ 00:04:44.049 16:16:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:44.049 16:16:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1977824 ]] 00:04:44.049 16:16:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1977824 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1977824 ']' 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1977824 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977824 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977824' 00:04:44.049 killing process with pid 1977824 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1977824 00:04:44.049 16:16:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1977824 00:04:44.309 16:16:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1978048 ]] 00:04:44.309 16:16:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1978048 00:04:44.309 16:16:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1978048 ']' 00:04:44.309 16:16:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1978048 00:04:44.309 16:16:30 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:44.309 16:16:30 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.309 16:16:30 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1978048 00:04:44.309 16:16:30 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:44.309 16:16:30 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:44.310 16:16:30 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1978048' 00:04:44.310 killing process with pid 1978048 00:04:44.310 16:16:30 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1978048 00:04:44.310 16:16:30 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1978048 00:04:44.571 16:16:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:44.571 16:16:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:44.571 16:16:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1977824 ]] 00:04:44.571 16:16:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1977824 00:04:44.571 16:16:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1977824 ']' 00:04:44.571 16:16:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1977824 00:04:44.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1977824) - No such process 00:04:44.571 16:16:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1977824 is not found' 00:04:44.571 Process with pid 1977824 is not found 00:04:44.571 16:16:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1978048 ]] 00:04:44.571 16:16:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1978048 00:04:44.571 16:16:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1978048 ']' 00:04:44.571 16:16:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1978048 00:04:44.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1978048) - No such process 00:04:44.571 16:16:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1978048 is not found' 00:04:44.571 Process with pid 1978048 is not found 00:04:44.571 16:16:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:44.571 00:04:44.571 real 0m16.144s 00:04:44.571 user 0m28.469s 00:04:44.571 sys 0m4.739s 00:04:44.571 16:16:30 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.571 
16:16:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.571 ************************************ 00:04:44.571 END TEST cpu_locks 00:04:44.571 ************************************ 00:04:44.571 00:04:44.571 real 0m40.995s 00:04:44.571 user 1m19.044s 00:04:44.571 sys 0m7.925s 00:04:44.571 16:16:30 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.571 16:16:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.571 ************************************ 00:04:44.571 END TEST event 00:04:44.571 ************************************ 00:04:44.571 16:16:30 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:44.571 16:16:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.571 16:16:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.571 16:16:30 -- common/autotest_common.sh@10 -- # set +x 00:04:44.571 ************************************ 00:04:44.571 START TEST thread 00:04:44.571 ************************************ 00:04:44.571 16:16:30 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:44.571 * Looking for test storage... 
00:04:44.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:44.571 16:16:30 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.571 16:16:30 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.571 16:16:30 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.832 16:16:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.832 16:16:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.832 16:16:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.832 16:16:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.832 16:16:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.832 16:16:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.832 16:16:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.832 16:16:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.832 16:16:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.832 16:16:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.832 16:16:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.832 16:16:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:44.832 16:16:30 thread -- scripts/common.sh@345 -- # : 1 00:04:44.832 16:16:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.832 16:16:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.832 16:16:30 thread -- scripts/common.sh@365 -- # decimal 1 00:04:44.832 16:16:30 thread -- scripts/common.sh@353 -- # local d=1 00:04:44.832 16:16:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.832 16:16:30 thread -- scripts/common.sh@355 -- # echo 1 00:04:44.832 16:16:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.832 16:16:30 thread -- scripts/common.sh@366 -- # decimal 2 00:04:44.832 16:16:30 thread -- scripts/common.sh@353 -- # local d=2 00:04:44.832 16:16:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.832 16:16:30 thread -- scripts/common.sh@355 -- # echo 2 00:04:44.832 16:16:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.832 16:16:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.832 16:16:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.832 16:16:30 thread -- scripts/common.sh@368 -- # return 0 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.832 --rc genhtml_branch_coverage=1 00:04:44.832 --rc genhtml_function_coverage=1 00:04:44.832 --rc genhtml_legend=1 00:04:44.832 --rc geninfo_all_blocks=1 00:04:44.832 --rc geninfo_unexecuted_blocks=1 00:04:44.832 00:04:44.832 ' 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.832 --rc genhtml_branch_coverage=1 00:04:44.832 --rc genhtml_function_coverage=1 00:04:44.832 --rc genhtml_legend=1 00:04:44.832 --rc geninfo_all_blocks=1 00:04:44.832 --rc geninfo_unexecuted_blocks=1 00:04:44.832 00:04:44.832 ' 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.832 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.832 --rc genhtml_branch_coverage=1 00:04:44.832 --rc genhtml_function_coverage=1 00:04:44.832 --rc genhtml_legend=1 00:04:44.832 --rc geninfo_all_blocks=1 00:04:44.832 --rc geninfo_unexecuted_blocks=1 00:04:44.832 00:04:44.832 ' 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.832 --rc genhtml_branch_coverage=1 00:04:44.832 --rc genhtml_function_coverage=1 00:04:44.832 --rc genhtml_legend=1 00:04:44.832 --rc geninfo_all_blocks=1 00:04:44.832 --rc geninfo_unexecuted_blocks=1 00:04:44.832 00:04:44.832 ' 00:04:44.832 16:16:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.832 16:16:30 thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.832 ************************************ 00:04:44.832 START TEST thread_poller_perf 00:04:44.832 ************************************ 00:04:44.832 16:16:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:44.832 [2024-11-20 16:16:30.680885] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:44.832 [2024-11-20 16:16:30.680974] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978498 ] 00:04:44.832 [2024-11-20 16:16:30.757656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.091 [2024-11-20 16:16:30.794708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.091 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:46.029 [2024-11-20T15:16:31.988Z] ====================================== 00:04:46.029 [2024-11-20T15:16:31.988Z] busy:2407468024 (cyc) 00:04:46.029 [2024-11-20T15:16:31.988Z] total_run_count: 287000 00:04:46.029 [2024-11-20T15:16:31.988Z] tsc_hz: 2400000000 (cyc) 00:04:46.029 [2024-11-20T15:16:31.988Z] ====================================== 00:04:46.029 [2024-11-20T15:16:31.988Z] poller_cost: 8388 (cyc), 3495 (nsec) 00:04:46.029 00:04:46.029 real 0m1.175s 00:04:46.029 user 0m1.105s 00:04:46.029 sys 0m0.065s 00:04:46.029 16:16:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.029 16:16:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.029 ************************************ 00:04:46.029 END TEST thread_poller_perf 00:04:46.029 ************************************ 00:04:46.029 16:16:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:46.029 16:16:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:46.029 16:16:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.029 16:16:31 thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.029 ************************************ 00:04:46.029 START TEST thread_poller_perf 00:04:46.029 
************************************ 00:04:46.029 16:16:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:46.029 [2024-11-20 16:16:31.934145] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:46.029 [2024-11-20 16:16:31.934229] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978849 ] 00:04:46.287 [2024-11-20 16:16:32.010384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.288 [2024-11-20 16:16:32.045548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.288 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:47.227 [2024-11-20T15:16:33.186Z] ====================================== 00:04:47.227 [2024-11-20T15:16:33.186Z] busy:2401999708 (cyc) 00:04:47.227 [2024-11-20T15:16:33.186Z] total_run_count: 3812000 00:04:47.227 [2024-11-20T15:16:33.186Z] tsc_hz: 2400000000 (cyc) 00:04:47.227 [2024-11-20T15:16:33.186Z] ====================================== 00:04:47.227 [2024-11-20T15:16:33.186Z] poller_cost: 630 (cyc), 262 (nsec) 00:04:47.227 00:04:47.227 real 0m1.166s 00:04:47.227 user 0m1.094s 00:04:47.227 sys 0m0.068s 00:04:47.227 16:16:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.227 16:16:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.227 ************************************ 00:04:47.227 END TEST thread_poller_perf 00:04:47.227 ************************************ 00:04:47.227 16:16:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:47.227 00:04:47.227 real 0m2.699s 00:04:47.227 user 0m2.373s 00:04:47.227 sys 0m0.339s 00:04:47.227 16:16:33 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.227 16:16:33 thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.227 ************************************ 00:04:47.227 END TEST thread 00:04:47.227 ************************************ 00:04:47.227 16:16:33 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:47.227 16:16:33 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:47.227 16:16:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.227 16:16:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.227 16:16:33 -- common/autotest_common.sh@10 -- # set +x 00:04:47.488 ************************************ 00:04:47.488 START TEST app_cmdline 00:04:47.488 ************************************ 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:47.488 * Looking for test storage... 00:04:47.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.488 16:16:33 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.488 --rc genhtml_branch_coverage=1 
00:04:47.488 --rc genhtml_function_coverage=1 00:04:47.488 --rc genhtml_legend=1 00:04:47.488 --rc geninfo_all_blocks=1 00:04:47.488 --rc geninfo_unexecuted_blocks=1 00:04:47.488 00:04:47.488 ' 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.488 --rc genhtml_branch_coverage=1 00:04:47.488 --rc genhtml_function_coverage=1 00:04:47.488 --rc genhtml_legend=1 00:04:47.488 --rc geninfo_all_blocks=1 00:04:47.488 --rc geninfo_unexecuted_blocks=1 00:04:47.488 00:04:47.488 ' 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.488 --rc genhtml_branch_coverage=1 00:04:47.488 --rc genhtml_function_coverage=1 00:04:47.488 --rc genhtml_legend=1 00:04:47.488 --rc geninfo_all_blocks=1 00:04:47.488 --rc geninfo_unexecuted_blocks=1 00:04:47.488 00:04:47.488 ' 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.488 --rc genhtml_branch_coverage=1 00:04:47.488 --rc genhtml_function_coverage=1 00:04:47.488 --rc genhtml_legend=1 00:04:47.488 --rc geninfo_all_blocks=1 00:04:47.488 --rc geninfo_unexecuted_blocks=1 00:04:47.488 00:04:47.488 ' 00:04:47.488 16:16:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:47.488 16:16:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1979250 00:04:47.488 16:16:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1979250 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1979250 ']' 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@842 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.488 16:16:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:47.488 16:16:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:47.488 [2024-11-20 16:16:33.443744] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:04:47.488 [2024-11-20 16:16:33.443814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979250 ] 00:04:47.748 [2024-11-20 16:16:33.518767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.749 [2024-11-20 16:16:33.560435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.320 16:16:34 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.320 16:16:34 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:48.320 16:16:34 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:48.580 { 00:04:48.580 "version": "SPDK v25.01-pre git sha1 7bc1aace1", 00:04:48.580 "fields": { 00:04:48.580 "major": 25, 00:04:48.580 "minor": 1, 00:04:48.580 "patch": 0, 00:04:48.580 "suffix": "-pre", 00:04:48.580 "commit": "7bc1aace1" 00:04:48.580 } 00:04:48.580 } 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:48.580 16:16:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:48.580 16:16:34 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:48.842 request: 00:04:48.842 { 00:04:48.842 "method": "env_dpdk_get_mem_stats", 00:04:48.842 "req_id": 1 00:04:48.842 } 00:04:48.842 Got JSON-RPC error response 00:04:48.842 response: 00:04:48.842 { 00:04:48.842 "code": -32601, 00:04:48.842 "message": "Method not found" 00:04:48.842 } 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.842 16:16:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1979250 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1979250 ']' 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1979250 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979250 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979250' 00:04:48.842 killing process with pid 1979250 00:04:48.842 16:16:34 
app_cmdline -- common/autotest_common.sh@973 -- # kill 1979250 00:04:48.842 16:16:34 app_cmdline -- common/autotest_common.sh@978 -- # wait 1979250 00:04:49.102 00:04:49.102 real 0m1.677s 00:04:49.102 user 0m2.003s 00:04:49.102 sys 0m0.432s 00:04:49.102 16:16:34 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.102 16:16:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:49.102 ************************************ 00:04:49.102 END TEST app_cmdline 00:04:49.102 ************************************ 00:04:49.102 16:16:34 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:49.102 16:16:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.102 16:16:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.102 16:16:34 -- common/autotest_common.sh@10 -- # set +x 00:04:49.102 ************************************ 00:04:49.102 START TEST version 00:04:49.102 ************************************ 00:04:49.102 16:16:34 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:49.102 * Looking for test storage... 
00:04:49.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:49.102 16:16:35 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.102 16:16:35 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.102 16:16:35 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.364 16:16:35 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.364 16:16:35 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.364 16:16:35 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.364 16:16:35 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.364 16:16:35 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.364 16:16:35 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.364 16:16:35 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.364 16:16:35 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.364 16:16:35 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.364 16:16:35 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.364 16:16:35 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.364 16:16:35 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.364 16:16:35 version -- scripts/common.sh@344 -- # case "$op" in 00:04:49.364 16:16:35 version -- scripts/common.sh@345 -- # : 1 00:04:49.364 16:16:35 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.364 16:16:35 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.364 16:16:35 version -- scripts/common.sh@365 -- # decimal 1 00:04:49.364 16:16:35 version -- scripts/common.sh@353 -- # local d=1 00:04:49.364 16:16:35 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.364 16:16:35 version -- scripts/common.sh@355 -- # echo 1 00:04:49.364 16:16:35 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.364 16:16:35 version -- scripts/common.sh@366 -- # decimal 2 00:04:49.364 16:16:35 version -- scripts/common.sh@353 -- # local d=2 00:04:49.364 16:16:35 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.364 16:16:35 version -- scripts/common.sh@355 -- # echo 2 00:04:49.364 16:16:35 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.364 16:16:35 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.364 16:16:35 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.364 16:16:35 version -- scripts/common.sh@368 -- # return 0 00:04:49.364 16:16:35 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.364 16:16:35 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.364 --rc genhtml_branch_coverage=1 00:04:49.364 --rc genhtml_function_coverage=1 00:04:49.364 --rc genhtml_legend=1 00:04:49.364 --rc geninfo_all_blocks=1 00:04:49.364 --rc geninfo_unexecuted_blocks=1 00:04:49.364 00:04:49.364 ' 00:04:49.364 16:16:35 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.364 --rc genhtml_branch_coverage=1 00:04:49.364 --rc genhtml_function_coverage=1 00:04:49.364 --rc genhtml_legend=1 00:04:49.364 --rc geninfo_all_blocks=1 00:04:49.364 --rc geninfo_unexecuted_blocks=1 00:04:49.364 00:04:49.364 ' 00:04:49.364 16:16:35 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.364 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.364 --rc genhtml_branch_coverage=1 00:04:49.364 --rc genhtml_function_coverage=1 00:04:49.364 --rc genhtml_legend=1 00:04:49.364 --rc geninfo_all_blocks=1 00:04:49.364 --rc geninfo_unexecuted_blocks=1 00:04:49.364 00:04:49.364 ' 00:04:49.364 16:16:35 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.364 --rc genhtml_branch_coverage=1 00:04:49.364 --rc genhtml_function_coverage=1 00:04:49.364 --rc genhtml_legend=1 00:04:49.364 --rc geninfo_all_blocks=1 00:04:49.364 --rc geninfo_unexecuted_blocks=1 00:04:49.364 00:04:49.364 ' 00:04:49.364 16:16:35 version -- app/version.sh@17 -- # get_header_version major 00:04:49.364 16:16:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:49.364 16:16:35 version -- app/version.sh@14 -- # cut -f2 00:04:49.364 16:16:35 version -- app/version.sh@14 -- # tr -d '"' 00:04:49.364 16:16:35 version -- app/version.sh@17 -- # major=25 00:04:49.364 16:16:35 version -- app/version.sh@18 -- # get_header_version minor 00:04:49.364 16:16:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:49.364 16:16:35 version -- app/version.sh@14 -- # cut -f2 00:04:49.364 16:16:35 version -- app/version.sh@14 -- # tr -d '"' 00:04:49.364 16:16:35 version -- app/version.sh@18 -- # minor=1 00:04:49.364 16:16:35 version -- app/version.sh@19 -- # get_header_version patch 00:04:49.364 16:16:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:49.364 16:16:35 version -- app/version.sh@14 -- # cut -f2 00:04:49.364 16:16:35 version -- app/version.sh@14 -- # tr -d '"' 00:04:49.364 
16:16:35 version -- app/version.sh@19 -- # patch=0 00:04:49.364 16:16:35 version -- app/version.sh@20 -- # get_header_version suffix 00:04:49.364 16:16:35 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:49.364 16:16:35 version -- app/version.sh@14 -- # cut -f2 00:04:49.364 16:16:35 version -- app/version.sh@14 -- # tr -d '"' 00:04:49.364 16:16:35 version -- app/version.sh@20 -- # suffix=-pre 00:04:49.364 16:16:35 version -- app/version.sh@22 -- # version=25.1 00:04:49.364 16:16:35 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:49.364 16:16:35 version -- app/version.sh@28 -- # version=25.1rc0 00:04:49.364 16:16:35 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:49.364 16:16:35 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:49.364 16:16:35 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:49.364 16:16:35 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:49.364 00:04:49.364 real 0m0.259s 00:04:49.364 user 0m0.148s 00:04:49.364 sys 0m0.158s 00:04:49.364 16:16:35 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.364 16:16:35 version -- common/autotest_common.sh@10 -- # set +x 00:04:49.364 ************************************ 00:04:49.364 END TEST version 00:04:49.364 ************************************ 00:04:49.364 16:16:35 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:49.365 16:16:35 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:49.365 16:16:35 -- spdk/autotest.sh@194 -- # uname -s 00:04:49.365 16:16:35 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:49.365 16:16:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:49.365 16:16:35 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:49.365 16:16:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:49.365 16:16:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:49.365 16:16:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:49.365 16:16:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:49.365 16:16:35 -- common/autotest_common.sh@10 -- # set +x 00:04:49.365 16:16:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:49.365 16:16:35 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:49.365 16:16:35 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:49.365 16:16:35 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:49.365 16:16:35 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:49.365 16:16:35 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:49.365 16:16:35 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:49.365 16:16:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:49.365 16:16:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.365 16:16:35 -- common/autotest_common.sh@10 -- # set +x 00:04:49.626 ************************************ 00:04:49.626 START TEST nvmf_tcp 00:04:49.626 ************************************ 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:49.626 * Looking for test storage... 
00:04:49.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.626 16:16:35 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.626 --rc genhtml_branch_coverage=1 00:04:49.626 --rc genhtml_function_coverage=1 00:04:49.626 --rc genhtml_legend=1 00:04:49.626 --rc geninfo_all_blocks=1 00:04:49.626 --rc geninfo_unexecuted_blocks=1 00:04:49.626 00:04:49.626 ' 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.626 --rc genhtml_branch_coverage=1 00:04:49.626 --rc genhtml_function_coverage=1 00:04:49.626 --rc genhtml_legend=1 00:04:49.626 --rc geninfo_all_blocks=1 00:04:49.626 --rc geninfo_unexecuted_blocks=1 00:04:49.626 00:04:49.626 ' 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:49.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.626 --rc genhtml_branch_coverage=1 00:04:49.626 --rc genhtml_function_coverage=1 00:04:49.626 --rc genhtml_legend=1 00:04:49.626 --rc geninfo_all_blocks=1 00:04:49.626 --rc geninfo_unexecuted_blocks=1 00:04:49.626 00:04:49.626 ' 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.626 --rc genhtml_branch_coverage=1 00:04:49.626 --rc genhtml_function_coverage=1 00:04:49.626 --rc genhtml_legend=1 00:04:49.626 --rc geninfo_all_blocks=1 00:04:49.626 --rc geninfo_unexecuted_blocks=1 00:04:49.626 00:04:49.626 ' 00:04:49.626 16:16:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:49.626 16:16:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:49.626 16:16:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.626 16:16:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.626 ************************************ 00:04:49.627 START TEST nvmf_target_core 00:04:49.627 ************************************ 00:04:49.627 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:49.888 * Looking for test storage... 
00:04:49.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:49.888 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.888 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.888 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.888 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.888 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.888 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.888 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.888 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.889 --rc genhtml_branch_coverage=1 00:04:49.889 --rc genhtml_function_coverage=1 00:04:49.889 --rc genhtml_legend=1 00:04:49.889 --rc geninfo_all_blocks=1 00:04:49.889 --rc geninfo_unexecuted_blocks=1 00:04:49.889 00:04:49.889 ' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.889 --rc genhtml_branch_coverage=1 
00:04:49.889 --rc genhtml_function_coverage=1 00:04:49.889 --rc genhtml_legend=1 00:04:49.889 --rc geninfo_all_blocks=1 00:04:49.889 --rc geninfo_unexecuted_blocks=1 00:04:49.889 00:04:49.889 ' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.889 --rc genhtml_branch_coverage=1 00:04:49.889 --rc genhtml_function_coverage=1 00:04:49.889 --rc genhtml_legend=1 00:04:49.889 --rc geninfo_all_blocks=1 00:04:49.889 --rc geninfo_unexecuted_blocks=1 00:04:49.889 00:04:49.889 ' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.889 --rc genhtml_branch_coverage=1 00:04:49.889 --rc genhtml_function_coverage=1 00:04:49.889 --rc genhtml_legend=1 00:04:49.889 --rc geninfo_all_blocks=1 00:04:49.889 --rc geninfo_unexecuted_blocks=1 00:04:49.889 00:04:49.889 ' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.889 16:16:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:50.152 ************************************ 00:04:50.152 START TEST nvmf_abort 00:04:50.152 ************************************ 00:04:50.152 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:50.152 * Looking for test storage... 
00:04:50.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:50.152 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.152 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.152 16:16:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.152 
16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.152 --rc genhtml_branch_coverage=1 00:04:50.152 --rc genhtml_function_coverage=1 00:04:50.152 --rc genhtml_legend=1 00:04:50.152 --rc geninfo_all_blocks=1 00:04:50.152 --rc 
geninfo_unexecuted_blocks=1 00:04:50.152 00:04:50.152 ' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.152 --rc genhtml_branch_coverage=1 00:04:50.152 --rc genhtml_function_coverage=1 00:04:50.152 --rc genhtml_legend=1 00:04:50.152 --rc geninfo_all_blocks=1 00:04:50.152 --rc geninfo_unexecuted_blocks=1 00:04:50.152 00:04:50.152 ' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.152 --rc genhtml_branch_coverage=1 00:04:50.152 --rc genhtml_function_coverage=1 00:04:50.152 --rc genhtml_legend=1 00:04:50.152 --rc geninfo_all_blocks=1 00:04:50.152 --rc geninfo_unexecuted_blocks=1 00:04:50.152 00:04:50.152 ' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.152 --rc genhtml_branch_coverage=1 00:04:50.152 --rc genhtml_function_coverage=1 00:04:50.152 --rc genhtml_legend=1 00:04:50.152 --rc geninfo_all_blocks=1 00:04:50.152 --rc geninfo_unexecuted_blocks=1 00:04:50.152 00:04:50.152 ' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.152 16:16:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.152 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:50.153 16:16:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:58.314 16:16:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:04:58.314 Found 0000:31:00.0 (0x8086 - 0x159b) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:04:58.314 Found 0000:31:00.1 (0x8086 - 0x159b) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:58.314 16:16:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:04:58.314 Found net devices under 0000:31:00.0: cvl_0_0 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:04:58.314 Found net devices under 0000:31:00.1: cvl_0_1 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:58.314 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:58.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:58.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:04:58.315 00:04:58.315 --- 10.0.0.2 ping statistics --- 00:04:58.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:58.315 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:58.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:58.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:04:58.315 00:04:58.315 --- 10.0.0.1 ping statistics --- 00:04:58.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:58.315 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1983747 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1983747 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1983747 ']' 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.315 16:16:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.315 [2024-11-20 16:16:43.535822] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:04:58.315 [2024-11-20 16:16:43.535874] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:58.315 [2024-11-20 16:16:43.631704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.315 [2024-11-20 16:16:43.674922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:58.315 [2024-11-20 16:16:43.674963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:58.315 [2024-11-20 16:16:43.674972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:58.315 [2024-11-20 16:16:43.674978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:58.315 [2024-11-20 16:16:43.674993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:58.315 [2024-11-20 16:16:43.676577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.315 [2024-11-20 16:16:43.676736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.315 [2024-11-20 16:16:43.676736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.576 [2024-11-20 16:16:44.387146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.576 Malloc0 00:04:58.576 16:16:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.576 Delay0 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.576 [2024-11-20 16:16:44.462199] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.576 16:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:58.837 [2024-11-20 16:16:44.634157] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:01.382 Initializing NVMe Controllers 00:05:01.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:01.382 controller IO queue size 128 less than required 00:05:01.382 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:01.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:01.382 Initialization complete. Launching workers. 
00:05:01.382 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29084 00:05:01.382 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29145, failed to submit 62 00:05:01.382 success 29088, unsuccessful 57, failed 0 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:01.382 rmmod nvme_tcp 00:05:01.382 rmmod nvme_fabrics 00:05:01.382 rmmod nvme_keyring 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:01.382 16:16:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1983747 ']' 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1983747 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1983747 ']' 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1983747 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983747 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1983747' 00:05:01.382 killing process with pid 1983747 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1983747 00:05:01.382 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1983747 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:01.382 16:16:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:03.296 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:03.296 00:05:03.296 real 0m13.277s 00:05:03.296 user 0m14.129s 00:05:03.296 sys 0m6.448s 00:05:03.296 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.296 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 ************************************ 00:05:03.296 END TEST nvmf_abort 00:05:03.296 ************************************ 00:05:03.296 16:16:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:03.296 16:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:03.296 16:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.296 16:16:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:03.296 ************************************ 00:05:03.296 START TEST nvmf_ns_hotplug_stress 00:05:03.296 ************************************ 00:05:03.296 16:16:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:03.560 * Looking for test storage... 00:05:03.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:03.560 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.560 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.560 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.560 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.560 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.561 
16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.561 16:16:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.561 --rc genhtml_branch_coverage=1 00:05:03.561 --rc genhtml_function_coverage=1 00:05:03.561 --rc genhtml_legend=1 00:05:03.561 --rc geninfo_all_blocks=1 00:05:03.561 --rc geninfo_unexecuted_blocks=1 00:05:03.561 00:05:03.561 ' 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.561 --rc genhtml_branch_coverage=1 00:05:03.561 --rc genhtml_function_coverage=1 00:05:03.561 --rc genhtml_legend=1 00:05:03.561 --rc geninfo_all_blocks=1 00:05:03.561 --rc geninfo_unexecuted_blocks=1 00:05:03.561 00:05:03.561 ' 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.561 --rc genhtml_branch_coverage=1 00:05:03.561 --rc genhtml_function_coverage=1 00:05:03.561 --rc genhtml_legend=1 00:05:03.561 --rc geninfo_all_blocks=1 00:05:03.561 --rc geninfo_unexecuted_blocks=1 00:05:03.561 00:05:03.561 ' 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.561 --rc genhtml_branch_coverage=1 00:05:03.561 --rc genhtml_function_coverage=1 00:05:03.561 --rc genhtml_legend=1 00:05:03.561 --rc geninfo_all_blocks=1 00:05:03.561 --rc geninfo_unexecuted_blocks=1 00:05:03.561 
00:05:03.561 ' 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.561 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:03.562 16:16:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:11.707 16:16:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:11.707 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:11.707 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:11.707 16:16:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:11.707 Found net devices under 0000:31:00.0: cvl_0_0 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:11.707 16:16:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:11.707 Found net devices under 0000:31:00.1: cvl_0_1 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:11.707 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:11.708 16:16:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:11.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:11.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:05:11.708 00:05:11.708 --- 10.0.0.2 ping statistics --- 00:05:11.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:11.708 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:11.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:11.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:05:11.708 00:05:11.708 --- 10.0.0.1 ping statistics --- 00:05:11.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:11.708 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1988629 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1988629 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1988629 ']' 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.708 16:16:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.708 [2024-11-20 16:16:56.956413] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:05:11.708 [2024-11-20 16:16:56.956463] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:11.708 [2024-11-20 16:16:57.039162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.708 [2024-11-20 16:16:57.090546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:11.708 [2024-11-20 16:16:57.090592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:11.708 [2024-11-20 16:16:57.090600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:11.708 [2024-11-20 16:16:57.090607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:11.708 [2024-11-20 16:16:57.090613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:11.708 [2024-11-20 16:16:57.092319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.708 [2024-11-20 16:16:57.092357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.708 [2024-11-20 16:16:57.092364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.969 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.969 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:11.969 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:11.969 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.970 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:11.970 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:11.970 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:11.970 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:12.232 [2024-11-20 16:16:57.954059] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.232 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:12.232 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:12.494 [2024-11-20 16:16:58.315514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:12.494 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:12.755 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:12.755 Malloc0 00:05:13.017 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:13.017 Delay0 00:05:13.017 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.278 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:13.538 NULL1 00:05:13.538 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:13.538 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1989221 00:05:13.538 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:13.538 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:13.538 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.799 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.061 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:14.061 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:14.061 true 00:05:14.061 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:14.061 16:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.323 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.584 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:14.584 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:14.584 true 00:05:14.584 16:17:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:14.584 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.845 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.148 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:15.148 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:15.148 true 00:05:15.462 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:15.462 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.462 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.747 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:15.747 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:15.747 true 00:05:15.747 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:15.747 16:17:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.037 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.037 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:16.037 16:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:16.329 true 00:05:16.329 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:16.329 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.628 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.628 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:16.628 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:16.888 true 00:05:16.888 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:16.888 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.149 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.149 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:17.149 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:17.410 true 00:05:17.410 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:17.410 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.670 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.670 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:17.670 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:17.929 true 00:05:17.929 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:17.930 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.190 
16:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.190 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:18.190 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:18.450 true 00:05:18.450 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:18.450 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.710 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.971 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:18.971 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:18.971 true 00:05:18.971 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:18.971 16:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.232 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.492 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:19.492 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:19.492 true 00:05:19.492 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:19.492 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.753 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.013 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:20.013 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:20.013 true 00:05:20.013 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:20.013 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.273 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.533 
16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:20.533 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:20.533 true 00:05:20.533 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:20.533 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.793 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.054 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:21.054 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:21.054 true 00:05:21.054 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:21.054 16:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.314 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.574 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:21.574 16:17:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:21.574 true 00:05:21.574 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:21.574 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.833 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.093 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:22.093 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:22.093 true 00:05:22.093 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:22.093 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.354 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.615 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:22.615 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:22.615 true 00:05:22.615 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:22.615 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.875 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.136 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:23.136 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:23.136 true 00:05:23.136 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:23.136 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.407 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.668 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:23.668 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:23.668 true 00:05:23.668 16:17:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:23.668 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.929 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.190 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:24.190 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:24.190 true 00:05:24.450 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:24.450 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.450 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.710 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:24.710 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:24.970 true 00:05:24.970 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:24.970 16:17:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.970 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.231 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:25.231 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:25.493 true 00:05:25.493 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:25.493 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.493 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.753 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:25.753 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:26.013 true 00:05:26.013 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:26.013 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.013 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.274 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:26.274 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:26.536 true 00:05:26.536 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:26.536 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.797 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.797 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:26.797 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:27.057 true 00:05:27.057 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:27.057 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.317 
16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.317 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:27.317 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:27.578 true 00:05:27.578 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:27.578 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.839 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.839 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:28.100 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:28.100 true 00:05:28.100 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:28.100 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.361 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.621 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:28.621 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:28.621 true 00:05:28.621 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:28.621 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.881 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.142 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:29.142 16:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:29.142 true 00:05:29.142 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:29.142 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.403 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.663 
16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:29.663 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:29.663 true 00:05:29.663 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:29.923 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.923 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.184 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:30.184 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:30.444 true 00:05:30.444 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:30.444 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.444 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.704 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:30.704 16:17:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:30.965 true 00:05:30.965 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:30.965 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.965 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.226 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:31.226 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:31.487 true 00:05:31.487 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:31.487 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.749 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.749 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:31.749 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:32.010 true 00:05:32.010 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:32.010 16:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.271 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.271 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:32.271 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:32.532 true 00:05:32.532 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:32.532 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.793 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.793 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:32.793 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:33.054 true 00:05:33.054 16:17:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:33.054 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.321 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.321 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:33.321 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:33.582 true 00:05:33.582 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:33.582 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.842 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.103 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:34.103 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:34.103 true 00:05:34.103 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:34.103 16:17:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.363 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.625 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:34.625 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:34.625 true 00:05:34.625 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:34.625 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.887 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.148 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:35.148 16:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:35.148 true 00:05:35.409 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:35.409 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.409 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.669 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:35.669 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:35.928 true 00:05:35.928 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:35.929 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.929 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.189 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:36.189 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:36.449 true 00:05:36.449 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:36.449 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.449 
16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.709 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:36.709 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:36.969 true 00:05:36.969 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:36.970 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.230 16:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.230 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:37.230 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:37.490 true 00:05:37.490 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:37.490 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.751 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.751 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:37.751 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:38.013 true 00:05:38.013 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:38.013 16:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.274 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.274 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:38.274 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:38.535 true 00:05:38.535 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:38.535 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.795 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.055 
16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:39.055 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:39.055 true 00:05:39.055 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:39.055 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.316 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.576 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:39.576 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:39.576 true 00:05:39.576 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:39.576 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.836 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.096 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:40.097 16:17:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:40.097 true 00:05:40.097 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:40.097 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.417 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.784 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:40.784 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:40.784 true 00:05:40.784 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:40.784 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.044 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.044 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:41.044 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:41.305 true 00:05:41.305 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:41.305 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.565 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.565 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:41.565 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:41.825 true 00:05:41.825 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:41.826 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.086 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.346 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:05:42.346 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:05:42.346 true 00:05:42.346 16:17:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:42.346 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.606 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.867 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:05:42.867 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:05:42.867 true 00:05:42.867 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:42.867 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.128 16:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.397 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:05:43.397 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:05:43.397 true 00:05:43.397 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221 00:05:43.397 16:17:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:43.663 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:43.923 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:05:43.923 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:05:43.923 Initializing NVMe Controllers
00:05:43.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:05:43.923 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:05:43.923 Controller IO queue size 128, less than required.
00:05:43.923 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:43.923 WARNING: Some requested NVMe devices were skipped
00:05:43.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:43.923 Initialization complete. Launching workers.
00:05:43.923 ========================================================
00:05:43.923                                                                             Latency(us)
00:05:43.923 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:05:43.923 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30797.30      15.04    4156.03    1408.43   43951.92
00:05:43.923 ========================================================
00:05:43.923 Total                                                                  :   30797.30      15.04    4156.03    1408.43   43951.92
00:05:43.923
00:05:43.923 true
00:05:44.183 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1989221
00:05:44.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1989221) - No such process
00:05:44.183 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1989221
00:05:44.183 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:44.183 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:44.442 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:44.442 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:44.442 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:44.443 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:44.443 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
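The single-namespace loop traced above (the `ns_hotplug_stress.sh@44-50` lines) can be sketched roughly as follows. The `rpc.py` path, PID 1989221, the subsystem NQN, and the `null_size` progression are taken from the log; the dry-run fallback and the upper bound on the loop are assumptions added so the sketch is self-contained, not the actual test script:

```shell
#!/usr/bin/env bash
# Sketch of the namespace hotplug loop traced in the log above
# (target/ns_hotplug_stress.sh@44-50). Not the real script.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
[ -x "$rpc" ] || rpc=echo      # dry-run fallback when rpc.py is absent (assumption)
perf_pid=1989221               # background perf process PID seen in the log
null_size=1025

# While the perf process is alive: hot-remove NS 1, re-attach the Delay0 bdev,
# and grow the NULL1 bdev by one block per iteration (1026, 1027, ...).
while kill -0 "$perf_pid" 2>/dev/null && [ "$null_size" -lt 1056 ]; do
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$rpc" bdev_null_resize NULL1 "$null_size"
done
```

Once `kill -0` fails (the perf process has exited, which is what the `kill: (1989221) - No such process` message above records), the loop ends and the test moves on to its multi-threaded phase.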
00:05:44.703 null0 00:05:44.703 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:44.703 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:44.703 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:44.703 null1 00:05:44.703 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:44.703 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:44.703 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:44.964 null2 00:05:44.964 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:44.964 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:44.964 16:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:45.225 null3 00:05:45.225 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:45.225 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.225 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:45.225 null4 00:05:45.486 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:45.486 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.486 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:45.486 null5 00:05:45.486 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:45.486 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.486 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:45.745 null6 00:05:45.745 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:45.745 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:45.745 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:46.005 null7 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.005 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1995804 1995805 1995807 1995809 1995811 1995813 1995815 1995817 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.006 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.265 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.265 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.265 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.265 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.265 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.265 
16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.265 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.266 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.266 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.266 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.528 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.529 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.529 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.529 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.529 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.529 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.529 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.529 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.788 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.789 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.789 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.789 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.789 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.789 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.789 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.049 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.049 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.309 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.572 16:17:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.572 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.833 16:17:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.833 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.095 16:17:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.095 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.095 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.095 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.095 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.357 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.618 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.880 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.142 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.142 16:17:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.142 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:49.404 16:17:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.404 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.664 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:49.924 rmmod nvme_tcp 00:05:49.924 rmmod nvme_fabrics 00:05:49.924 rmmod nvme_keyring 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1988629 ']' 00:05:49.924 
16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1988629 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1988629 ']' 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1988629 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:49.924 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.925 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988629 00:05:49.925 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:49.925 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:49.925 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988629' 00:05:49.925 killing process with pid 1988629 00:05:49.925 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1988629 00:05:49.925 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1988629 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # 
iptables-save 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:50.185 16:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:52.095 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:52.095 00:05:52.095 real 0m48.810s 00:05:52.095 user 3m20.081s 00:05:52.095 sys 0m16.997s 00:05:52.095 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.095 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:52.095 ************************************ 00:05:52.095 END TEST nvmf_ns_hotplug_stress 00:05:52.095 ************************************ 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.357 ************************************ 00:05:52.357 START TEST nvmf_delete_subsystem 00:05:52.357 ************************************ 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:52.357 * Looking for test storage... 00:05:52.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.357 16:17:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.357 --rc genhtml_branch_coverage=1 00:05:52.357 --rc genhtml_function_coverage=1 00:05:52.357 --rc genhtml_legend=1 00:05:52.357 --rc geninfo_all_blocks=1 00:05:52.357 --rc geninfo_unexecuted_blocks=1 00:05:52.357 00:05:52.357 ' 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.357 --rc genhtml_branch_coverage=1 00:05:52.357 --rc genhtml_function_coverage=1 00:05:52.357 --rc genhtml_legend=1 00:05:52.357 --rc geninfo_all_blocks=1 00:05:52.357 --rc geninfo_unexecuted_blocks=1 00:05:52.357 00:05:52.357 ' 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:52.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.357 --rc genhtml_branch_coverage=1 00:05:52.357 --rc genhtml_function_coverage=1 00:05:52.357 --rc genhtml_legend=1 00:05:52.357 --rc geninfo_all_blocks=1 00:05:52.357 --rc geninfo_unexecuted_blocks=1 00:05:52.357 00:05:52.357 ' 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.357 --rc 
genhtml_branch_coverage=1 00:05:52.357 --rc genhtml_function_coverage=1 00:05:52.357 --rc genhtml_legend=1 00:05:52.357 --rc geninfo_all_blocks=1 00:05:52.357 --rc geninfo_unexecuted_blocks=1 00:05:52.357 00:05:52.357 ' 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:52.357 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.358 16:17:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.358 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:52.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:52.619 16:17:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:00.762 16:17:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:00.762 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:00.762 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:00.762 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:00.763 Found net devices under 0000:31:00.0: cvl_0_0 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:31:00.1: cvl_0_1' 00:06:00.763 Found net devices under 0000:31:00.1: cvl_0_1 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:00.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:00.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:06:00.763 00:06:00.763 --- 10.0.0.2 ping statistics --- 00:06:00.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.763 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:00.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:00.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:06:00.763 00:06:00.763 --- 10.0.0.1 ping statistics --- 00:06:00.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:00.763 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:00.763 16:17:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2001029 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2001029 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2001029 ']' 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.763 16:17:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.763 [2024-11-20 16:17:45.735117] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:06:00.763 [2024-11-20 16:17:45.735183] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.763 [2024-11-20 16:17:45.819457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.763 [2024-11-20 16:17:45.859625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.763 [2024-11-20 16:17:45.859662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:00.763 [2024-11-20 16:17:45.859674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.763 [2024-11-20 16:17:45.859681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.763 [2024-11-20 16:17:45.859687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:00.763 [2024-11-20 16:17:45.860978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.763 [2024-11-20 16:17:45.860989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.763 [2024-11-20 16:17:46.564500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.763 [2024-11-20 16:17:46.588671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:00.763 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.764 NULL1 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.764 Delay0 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.764 16:17:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2001374 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:00.764 16:17:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:00.764 [2024-11-20 16:17:46.685521] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:03.307 16:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:03.307 16:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.307 16:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 
00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 starting I/O failed: -6 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 [2024-11-20 16:17:48.890984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adff00 is same with the state(6) to be set 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Write completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.307 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error 
(sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 
Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 
Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 starting I/O failed: -6 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 [2024-11-20 16:17:48.895188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c3800d4b0 is same with the state(6) to be set 00:06:03.308 starting I/O failed: -6 00:06:03.308 starting I/O failed: -6 00:06:03.308 starting I/O failed: -6 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, 
sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read 
completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:03.308 Write completed with error (sct=0, sc=8) 00:06:03.308 Read completed with error (sct=0, sc=8) 00:06:04.248 [2024-11-20 16:17:49.867013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae15e0 is same with the state(6) to be set 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 [2024-11-20 
16:17:49.894499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae00e0 is same with the state(6) to be set 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 [2024-11-20 16:17:49.894803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae04a0 is same with the state(6) to be set 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with 
error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 [2024-11-20 16:17:49.896210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c3800d7e0 is same with the state(6) to be set 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Write completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.248 Read completed with error (sct=0, sc=8) 00:06:04.249 Write completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 Write completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error 
(sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 Read completed with error (sct=0, sc=8) 00:06:04.249 [2024-11-20 16:17:49.896291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c3800d020 is same with the state(6) to be set 00:06:04.249 Initializing NVMe Controllers 00:06:04.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:04.249 Controller IO queue size 128, less than required. 00:06:04.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:04.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:04.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:04.249 Initialization complete. Launching workers. 00:06:04.249 ======================================================== 00:06:04.249 Latency(us) 00:06:04.249 Device Information : IOPS MiB/s Average min max 00:06:04.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.25 0.09 879628.35 257.69 1007668.33 00:06:04.249 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.32 0.08 926862.04 264.29 2001502.58 00:06:04.249 ======================================================== 00:06:04.249 Total : 337.57 0.16 902060.87 257.69 2001502.58 00:06:04.249 00:06:04.249 [2024-11-20 16:17:49.896805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae15e0 (9): Bad file descriptor 00:06:04.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:04.249 16:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.249 16:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:04.249 
16:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2001374 00:06:04.249 16:17:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2001374 00:06:04.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2001374) - No such process 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2001374 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2001374 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2001374 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.509 [2024-11-20 16:17:50.426350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@54 -- # perf_pid=2002053 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2002053 00:06:04.509 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.769 [2024-11-20 16:17:50.506344] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:05.029 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.029 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2002053 00:06:05.029 16:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:05.600 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:05.600 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2002053 00:06:05.600 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.170 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.170 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2002053 00:06:06.170 16:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:06.739 16:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:06.739 16:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2002053 00:06:06.739 16:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.311 16:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.311 16:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2002053 00:06:07.311 16:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:07.572 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:07.572 16:17:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2002053
00:06:07.572 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:08.143 Initializing NVMe Controllers
00:06:08.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:08.143 Controller IO queue size 128, less than required.
00:06:08.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:08.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:08.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:08.143 Initialization complete. Launching workers.
00:06:08.143 ========================================================
00:06:08.143                                                                                  Latency(us)
00:06:08.143 Device Information                                            :     IOPS   MiB/s    Average        min        max
00:06:08.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00    0.06 1002339.73 1000215.86 1041602.02
00:06:08.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00    0.06 1002930.98 1000291.48 1009613.32
00:06:08.143 ========================================================
00:06:08.143 Total                                                         :   256.00    0.12 1002635.35 1000215.86 1041602.02
00:06:08.143
00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2002053
00:06:08.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2002053) - No such process
00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2002053
00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap -
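The trace above shows delete_subsystem.sh polling the spdk_nvme_perf PID with `kill -0` (which only checks that the process exists, sending no signal), sleeping 0.5 s between checks, and giving up after a bounded number of iterations. A minimal standalone sketch of that pattern; the function name `wait_for_pid` is illustrative, not SPDK's, while the 20-iteration cap and 0.5 s poll interval mirror the loop in the log:

```shell
#!/usr/bin/env bash
# Poll a background process until it exits, giving up after ~10 s.
# `kill -0` performs an existence check only; no signal is delivered.
wait_for_pid() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        # Bail out after 20 half-second polls, as the test script does.
        (( delay++ > 20 )) && return 1
        sleep 0.5
    done
    return 0
}

sleep 1 &            # stand-in for the spdk_nvme_perf workload
perf_pid=$!
wait_for_pid "$perf_pid" && echo "process $perf_pid exited"
```

Once `kill -0` fails ("No such process" in the log), the script `wait`s on the PID to collect its exit status before tearing the target down.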
SIGINT SIGTERM EXIT 00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:08.143 16:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:08.143 rmmod nvme_tcp 00:06:08.143 rmmod nvme_fabrics 00:06:08.143 rmmod nvme_keyring 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2001029 ']' 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2001029 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2001029 ']' 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2001029 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:08.143 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.143 16:17:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2001029 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2001029' 00:06:08.404 killing process with pid 2001029 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2001029 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2001029 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns
00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:08.404 16:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:10.954
00:06:10.954 real	0m18.239s
00:06:10.954 user	0m31.035s
00:06:10.954 sys	0m6.658s
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:10.954 ************************************
00:06:10.954 END TEST nvmf_delete_subsystem
00:06:10.954 ************************************
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:10.954 ************************************
00:06:10.954 START TEST nvmf_host_management
00:06:10.954 ************************************
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:06:10.954 * Looking for test storage...
00:06:10.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:10.954 16:17:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.954 16:17:56 
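The scripts/common.sh trace above is the harness comparing the installed lcov version against 2 (`lt 1.15 2`): each version string is split on dots and dashes into an array and the fields are compared numerically, left to right. A standalone sketch of the same idea; the function name `version_lt` is mine, not SPDK's, and missing fields are treated as 0 as in the traced logic:

```shell
#!/usr/bin/env bash
# Numeric field-by-field comparison of two dotted version strings,
# as the `cmp_versions 1.15 '<' 2` trace above performs.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Absent fields compare as 0, so "1.15" behaves like "1.15.0".
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

The harness uses the result to pick lcov 1.x vs 2.x flavored `--rc` coverage options, as seen in the LCOV_OPTS export that follows.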
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.954 --rc genhtml_branch_coverage=1 00:06:10.954 --rc genhtml_function_coverage=1 00:06:10.954 --rc genhtml_legend=1 00:06:10.954 --rc geninfo_all_blocks=1 00:06:10.954 --rc geninfo_unexecuted_blocks=1 00:06:10.954 00:06:10.954 ' 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.954 --rc genhtml_branch_coverage=1 00:06:10.954 --rc genhtml_function_coverage=1 00:06:10.954 --rc genhtml_legend=1 00:06:10.954 --rc geninfo_all_blocks=1 00:06:10.954 --rc geninfo_unexecuted_blocks=1 00:06:10.954 00:06:10.954 ' 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.954 --rc genhtml_branch_coverage=1 00:06:10.954 --rc genhtml_function_coverage=1 00:06:10.954 --rc genhtml_legend=1 00:06:10.954 --rc geninfo_all_blocks=1 00:06:10.954 --rc geninfo_unexecuted_blocks=1 00:06:10.954 00:06:10.954 ' 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.954 --rc genhtml_branch_coverage=1 00:06:10.954 --rc genhtml_function_coverage=1 00:06:10.954 --rc genhtml_legend=1 00:06:10.954 --rc geninfo_all_blocks=1 00:06:10.954 --rc geninfo_unexecuted_blocks=1 00:06:10.954 00:06:10.954 ' 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.954 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.955 16:17:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:19.097 16:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.097 16:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:19.097 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:19.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:19.097 16:18:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:19.097 Found net devices under 0000:31:00.0: cvl_0_0 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:19.097 Found net devices under 0000:31:00.1: cvl_0_1 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:19.097 16:18:03 
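The discovery loop traced above maps each detected E810 PCI function (0000:31:00.0 and 0000:31:00.1) to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the path prefix, yielding cvl_0_0 and cvl_0_1. A minimal sketch of that sysfs lookup, assuming only standard sysfs layout; the function name `net_devs_for_pci` is illustrative:

```shell
#!/usr/bin/env bash
# For a PCI address, list the kernel net interfaces it backs --
# the same /sys glob nvmf/common.sh uses to map 0000:31:00.0 -> cvl_0_0.
net_devs_for_pci() {
    local pci=$1
    local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # An unmatched glob stays literal; treat that as "no net devices".
    [[ -e ${pci_net_devs[0]} ]] || return 1
    printf '%s\n' "${pci_net_devs[@]##*/}"   # strip the /sys/... prefix
}

for pci in /sys/bus/pci/devices/*; do
    if net_devs_for_pci "${pci##*/}" >/dev/null; then
        echo "Found net devices under ${pci##*/}"
    fi
done
```

With two interfaces found, the script then picks cvl_0_0 as the target-side interface and cvl_0_1 as the initiator side, as the subsequent NVMF_TARGET_INTERFACE/NVMF_INITIATOR_INTERFACE assignments show.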
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.097 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.098 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.098 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:19.098 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:19.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:06:19.098 00:06:19.098 --- 10.0.0.2 ping statistics --- 00:06:19.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.098 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:06:19.098 16:18:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:19.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:06:19.098 00:06:19.098 --- 10.0.0.1 ping statistics --- 00:06:19.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.098 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2007211 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2007211 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2007211 ']' 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 [2024-11-20 16:18:04.100213] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:06:19.098 [2024-11-20 16:18:04.100273] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.098 [2024-11-20 16:18:04.170130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.098 [2024-11-20 16:18:04.201853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.098 [2024-11-20 16:18:04.201879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.098 [2024-11-20 16:18:04.201885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.098 [2024-11-20 16:18:04.201890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.098 [2024-11-20 16:18:04.201895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:19.098 [2024-11-20 16:18:04.204995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.098 [2024-11-20 16:18:04.205102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.098 [2024-11-20 16:18:04.205375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.098 [2024-11-20 16:18:04.205376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 [2024-11-20 16:18:04.329498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:19.098 16:18:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 Malloc0 00:06:19.098 [2024-11-20 16:18:04.407177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2007263 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2007263 /var/tmp/bdevperf.sock 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2007263 ']' 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:19.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:19.098 { 00:06:19.098 "params": { 00:06:19.098 "name": "Nvme$subsystem", 00:06:19.098 "trtype": "$TEST_TRANSPORT", 00:06:19.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:19.098 "adrfam": "ipv4", 00:06:19.098 "trsvcid": "$NVMF_PORT", 00:06:19.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:19.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:19.098 "hdgst": ${hdgst:-false}, 
00:06:19.098 "ddgst": ${ddgst:-false} 00:06:19.098 }, 00:06:19.098 "method": "bdev_nvme_attach_controller" 00:06:19.098 } 00:06:19.098 EOF 00:06:19.098 )") 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:19.098 16:18:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:19.098 "params": { 00:06:19.098 "name": "Nvme0", 00:06:19.098 "trtype": "tcp", 00:06:19.098 "traddr": "10.0.0.2", 00:06:19.098 "adrfam": "ipv4", 00:06:19.098 "trsvcid": "4420", 00:06:19.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:19.098 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:19.098 "hdgst": false, 00:06:19.098 "ddgst": false 00:06:19.098 }, 00:06:19.098 "method": "bdev_nvme_attach_controller" 00:06:19.098 }' 00:06:19.099 [2024-11-20 16:18:04.509859] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:06:19.099 [2024-11-20 16:18:04.509912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007263 ] 00:06:19.099 [2024-11-20 16:18:04.582483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.099 [2024-11-20 16:18:04.618788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.099 Running I/O for 10 seconds... 
00:06:19.359 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.359 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:19.359 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:19.359 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.359 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.621 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.621 [2024-11-20 16:18:05.387955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same 
with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 
[2024-11-20 16:18:05.388110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388189] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.621 [2024-11-20 16:18:05.388270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 
is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f910 is same with the state(6) to be set 00:06:19.622 [2024-11-20 16:18:05.388516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.622 [2024-11-20 16:18:05.388556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:06:19.622 [2024-11-20 16:18:05.388577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.622 [2024-11-20 16:18:05.388586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs repeated for READ sqid:1 cid:2 through cid:62, lba 8448 through 16128 (len:128 each), every completion ABORTED - SQ DELETION (00/08); repeated entries condensed ...]
00:06:19.623 [2024-11-20 
16:18:05.389647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:19.623 [2024-11-20 16:18:05.389655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:19.623 [2024-11-20 16:18:05.389664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1853c50 is same with the state(6) to be set 00:06:19.623 [2024-11-20 16:18:05.390928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:19.623 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.623 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:19.623 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.623 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:19.623 task offset: 8192 on job bdev=Nvme0n1 fails 00:06:19.623 00:06:19.623 Latency(us) 00:06:19.623 [2024-11-20T15:18:05.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:19.623 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:19.623 Job: Nvme0n1 ended in about 0.62 seconds with error 00:06:19.623 Verification LBA range: start 0x0 length 0x400 00:06:19.623 Nvme0n1 : 0.62 1762.90 110.18 103.70 0.00 33459.28 4396.37 30801.92 00:06:19.623 [2024-11-20T15:18:05.583Z] =================================================================================================================== 00:06:19.624 [2024-11-20T15:18:05.583Z] Total : 1762.90 110.18 103.70 0.00 33459.28 4396.37 30801.92 00:06:19.624 [2024-11-20 16:18:05.392931] 
app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:19.624 [2024-11-20 16:18:05.392956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843280 (9): Bad file descriptor 00:06:19.624 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.624 16:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:19.624 [2024-11-20 16:18:05.404537] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2007263 00:06:20.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2007263) - No such process 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:20.564 { 00:06:20.564 "params": { 00:06:20.564 "name": "Nvme$subsystem", 00:06:20.564 "trtype": "$TEST_TRANSPORT", 00:06:20.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:20.564 "adrfam": "ipv4", 00:06:20.564 "trsvcid": "$NVMF_PORT", 00:06:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:20.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:20.564 "hdgst": ${hdgst:-false}, 00:06:20.564 "ddgst": ${ddgst:-false} 00:06:20.564 }, 00:06:20.564 "method": "bdev_nvme_attach_controller" 00:06:20.564 } 00:06:20.564 EOF 00:06:20.564 )") 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:20.564 16:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:20.564 "params": { 00:06:20.564 "name": "Nvme0", 00:06:20.564 "trtype": "tcp", 00:06:20.564 "traddr": "10.0.0.2", 00:06:20.564 "adrfam": "ipv4", 00:06:20.564 "trsvcid": "4420", 00:06:20.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:20.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:20.564 "hdgst": false, 00:06:20.564 "ddgst": false 00:06:20.564 }, 00:06:20.564 "method": "bdev_nvme_attach_controller" 00:06:20.564 }' 00:06:20.564 [2024-11-20 16:18:06.462610] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
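The heredoc and `printf` traced above show `gen_nvmf_target_json` assembling one JSON fragment per subsystem id and handing the merged result to bdevperf on a substituted file descriptor. A minimal sketch of that pattern, assuming fixed `tcp` / `10.0.0.2` / `4420` values in place of the `$TEST_TRANSPORT` / `$NVMF_FIRST_TARGET_IP` / `$NVMF_PORT` variables the real nvmf/common.sh substitutes; the function body is a reconstruction, not the exact source:

```shell
# Sketch of the gen_nvmf_target_json pattern visible in the trace: one JSON
# fragment per subsystem id, collected into an array and joined with commas.
# Assumption: transport/address/port are hard-coded here instead of coming
# from the environment variables the real helper uses.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the fragments with commas, as the IFS=, + printf in the trace does.
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```

bdevperf then reads this through process substitution, e.g. `bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1`, which is what appears as `--json /dev/fd/62` in the log.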
00:06:20.564 [2024-11-20 16:18:06.462665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007619 ] 00:06:20.824 [2024-11-20 16:18:06.534230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.824 [2024-11-20 16:18:06.570188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.085 Running I/O for 1 seconds... 00:06:22.026 1856.00 IOPS, 116.00 MiB/s 00:06:22.026 Latency(us) 00:06:22.026 [2024-11-20T15:18:07.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.026 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:22.026 Verification LBA range: start 0x0 length 0x400 00:06:22.026 Nvme0n1 : 1.01 1910.07 119.38 0.00 0.00 32862.86 3481.60 31457.28 00:06:22.026 [2024-11-20T15:18:07.985Z] =================================================================================================================== 00:06:22.026 [2024-11-20T15:18:07.985Z] Total : 1910.07 119.38 0.00 0.00 32862.86 3481.60 31457.28 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:22.026 16:18:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:22.026 16:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:22.287 rmmod nvme_tcp 00:06:22.287 rmmod nvme_fabrics 00:06:22.287 rmmod nvme_keyring 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2007211 ']' 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2007211 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2007211 ']' 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2007211 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2007211 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2007211' 00:06:22.287 killing process with pid 2007211 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2007211 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2007211 00:06:22.287 [2024-11-20 16:18:08.200132] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.287 16:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:24.835 00:06:24.835 real 0m13.891s 00:06:24.835 user 0m20.514s 00:06:24.835 sys 0m6.584s 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.835 ************************************ 00:06:24.835 END TEST nvmf_host_management 00:06:24.835 ************************************ 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.835 ************************************ 00:06:24.835 START TEST nvmf_lvol 00:06:24.835 ************************************ 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:24.835 * Looking for test storage... 
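The `END TEST nvmf_host_management` / `START TEST nvmf_lvol` banners and the real/user/sys summary above come from the `run_test` wrapper in autotest_common.sh. A minimal sketch of that banner-and-dispatch pattern; the banner width is illustrative and the timing (`time`) and xtrace handling of the real wrapper are simplified away, so this is a reconstruction, not the exact source:

```shell
# Sketch of the run_test wrapper that produces the START TEST / END TEST
# banners seen in the trace. Assumption: the real helper also runs the
# command under `time` (yielding the real/user/sys lines) and manages
# xtrace state; both are omitted here for brevity.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"    # the real wrapper wraps this in `time`
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
```

Invoked as in the trace, e.g. `run_test nvmf_lvol .../test/nvmf/target/nvmf_lvol.sh --transport=tcp`, so each sub-test is bracketed by its own banners and exit status.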
00:06:24.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.835 16:18:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.835 --rc genhtml_branch_coverage=1 00:06:24.835 --rc genhtml_function_coverage=1 00:06:24.835 --rc genhtml_legend=1 00:06:24.835 --rc geninfo_all_blocks=1 00:06:24.835 --rc geninfo_unexecuted_blocks=1 
00:06:24.835 00:06:24.835 ' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.835 --rc genhtml_branch_coverage=1 00:06:24.835 --rc genhtml_function_coverage=1 00:06:24.835 --rc genhtml_legend=1 00:06:24.835 --rc geninfo_all_blocks=1 00:06:24.835 --rc geninfo_unexecuted_blocks=1 00:06:24.835 00:06:24.835 ' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.835 --rc genhtml_branch_coverage=1 00:06:24.835 --rc genhtml_function_coverage=1 00:06:24.835 --rc genhtml_legend=1 00:06:24.835 --rc geninfo_all_blocks=1 00:06:24.835 --rc geninfo_unexecuted_blocks=1 00:06:24.835 00:06:24.835 ' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.835 --rc genhtml_branch_coverage=1 00:06:24.835 --rc genhtml_function_coverage=1 00:06:24.835 --rc genhtml_legend=1 00:06:24.835 --rc geninfo_all_blocks=1 00:06:24.835 --rc geninfo_unexecuted_blocks=1 00:06:24.835 00:06:24.835 ' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.835 16:18:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.835 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.836 16:18:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:32.981 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:32.981 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:32.981 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.982 
16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:32.982 Found net devices under 0000:31:00.0: cvl_0_0 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.982 16:18:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:32.982 Found net devices under 0000:31:00.1: cvl_0_1 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:32.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:32.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:06:32.982 00:06:32.982 --- 10.0.0.2 ping statistics --- 00:06:32.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.982 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:32.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:06:32.982 00:06:32.982 --- 10.0.0.1 ping statistics --- 00:06:32.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.982 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:32.982 16:18:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:32.982 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:32.982 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:32.982 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.982 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:32.982 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2012782 00:06:32.982 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2012782 00:06:32.982 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2012782 ']' 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 [2024-11-20 16:18:18.080373] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:06:32.983 [2024-11-20 16:18:18.080424] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.983 [2024-11-20 16:18:18.159010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.983 [2024-11-20 16:18:18.195490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.983 [2024-11-20 16:18:18.195522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.983 [2024-11-20 16:18:18.195530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.983 [2024-11-20 16:18:18.195536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.983 [2024-11-20 16:18:18.195542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:32.983 [2024-11-20 16:18:18.196891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.983 [2024-11-20 16:18:18.197006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.983 [2024-11-20 16:18:18.197014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.983 16:18:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:33.245 [2024-11-20 16:18:19.076228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.245 16:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:33.507 16:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:33.507 16:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:33.768 16:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:33.768 16:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:33.768 16:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:34.028 16:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c92154c8-ed75-449f-99d2-36e568bfb54f 00:06:34.028 16:18:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c92154c8-ed75-449f-99d2-36e568bfb54f lvol 20 00:06:34.288 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1c53aba5-4b38-4ddc-9725-0debcd927925 00:06:34.288 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:34.288 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1c53aba5-4b38-4ddc-9725-0debcd927925 00:06:34.548 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:34.809 [2024-11-20 16:18:20.560269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.809 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.070 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:35.070 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2013224 00:06:35.070 16:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:36.010 16:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1c53aba5-4b38-4ddc-9725-0debcd927925 MY_SNAPSHOT 00:06:36.270 16:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ddbbe969-0b67-4fab-9feb-4cea37932744 00:06:36.270 16:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1c53aba5-4b38-4ddc-9725-0debcd927925 30 00:06:36.270 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ddbbe969-0b67-4fab-9feb-4cea37932744 MY_CLONE 00:06:36.530 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1c9a3892-96dc-4546-b1ef-d5b39d2f02a4 00:06:36.530 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1c9a3892-96dc-4546-b1ef-d5b39d2f02a4 00:06:37.101 16:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2013224 00:06:45.232 Initializing NVMe Controllers 00:06:45.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:45.232 Controller IO queue size 128, less than required. 00:06:45.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:45.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:45.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:45.232 Initialization complete. Launching workers. 00:06:45.232 ======================================================== 00:06:45.232 Latency(us) 00:06:45.232 Device Information : IOPS MiB/s Average min max 00:06:45.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12237.60 47.80 10459.62 1569.06 46741.45 00:06:45.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17689.70 69.10 7236.52 379.76 64141.34 00:06:45.232 ======================================================== 00:06:45.232 Total : 29927.30 116.90 8554.48 379.76 64141.34 00:06:45.232 00:06:45.232 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:45.493 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1c53aba5-4b38-4ddc-9725-0debcd927925 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c92154c8-ed75-449f-99d2-36e568bfb54f 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:45.753 rmmod nvme_tcp 00:06:45.753 rmmod nvme_fabrics 00:06:45.753 rmmod nvme_keyring 00:06:45.753 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2012782 ']' 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2012782 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2012782 ']' 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2012782 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012782 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012782' 00:06:46.013 killing process with pid 2012782 00:06:46.013 16:18:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2012782 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2012782 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:46.013 16:18:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.628 16:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:48.628 00:06:48.628 real 0m23.629s 00:06:48.628 user 1m4.199s 00:06:48.628 sys 0m8.416s 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:48.628 ************************************ 00:06:48.628 END TEST 
nvmf_lvol 00:06:48.628 ************************************ 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:48.628 ************************************ 00:06:48.628 START TEST nvmf_lvs_grow 00:06:48.628 ************************************ 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:48.628 * Looking for test storage... 00:06:48.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.628 16:18:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.628 --rc genhtml_branch_coverage=1 00:06:48.628 --rc genhtml_function_coverage=1 00:06:48.628 --rc genhtml_legend=1 00:06:48.628 --rc geninfo_all_blocks=1 00:06:48.628 --rc geninfo_unexecuted_blocks=1 00:06:48.628 00:06:48.628 ' 
00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.628 --rc genhtml_branch_coverage=1 00:06:48.628 --rc genhtml_function_coverage=1 00:06:48.628 --rc genhtml_legend=1 00:06:48.628 --rc geninfo_all_blocks=1 00:06:48.628 --rc geninfo_unexecuted_blocks=1 00:06:48.628 00:06:48.628 ' 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.628 --rc genhtml_branch_coverage=1 00:06:48.628 --rc genhtml_function_coverage=1 00:06:48.628 --rc genhtml_legend=1 00:06:48.628 --rc geninfo_all_blocks=1 00:06:48.628 --rc geninfo_unexecuted_blocks=1 00:06:48.628 00:06:48.628 ' 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.628 --rc genhtml_branch_coverage=1 00:06:48.628 --rc genhtml_function_coverage=1 00:06:48.628 --rc genhtml_legend=1 00:06:48.628 --rc geninfo_all_blocks=1 00:06:48.628 --rc geninfo_unexecuted_blocks=1 00:06:48.628 00:06:48.628 ' 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.628 16:18:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.628 
16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.628 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.629 16:18:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.629 
16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:48.629 16:18:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:56.769 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:56.769 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:56.769 
16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:56.769 Found net devices under 0000:31:00.0: cvl_0_0 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:56.769 Found net devices under 0000:31:00.1: cvl_0_1 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:56.769 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:56.770 16:18:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:56.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:06:56.770 00:06:56.770 --- 10.0.0.2 ping statistics --- 00:06:56.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.770 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:06:56.770 00:06:56.770 --- 10.0.0.1 ping statistics --- 00:06:56.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.770 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2019894 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2019894 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2019894 ']' 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.770 16:18:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:56.770 [2024-11-20 16:18:41.793347] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:06:56.770 [2024-11-20 16:18:41.793411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.770 [2024-11-20 16:18:41.876059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.770 [2024-11-20 16:18:41.916266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.770 [2024-11-20 16:18:41.916306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.770 [2024-11-20 16:18:41.916314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:56.770 [2024-11-20 16:18:41.916321] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:56.770 [2024-11-20 16:18:41.916327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:56.770 [2024-11-20 16:18:41.916908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.770 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.770 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:56.770 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:56.770 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.770 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:56.770 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.770 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:57.031 [2024-11-20 16:18:42.767891] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:57.031 ************************************ 00:06:57.031 START TEST lvs_grow_clean 00:06:57.031 ************************************ 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:57.031 16:18:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:57.291 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:57.291 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:57.291 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:06:57.291 16:18:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:06:57.291 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:57.552 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:57.552 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:57.552 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 lvol 150 00:06:57.811 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4446912b-1d25-4372-bd2c-aaa6594eb1fb 00:06:57.811 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:57.811 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:57.812 [2024-11-20 16:18:43.732199] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:57.812 [2024-11-20 16:18:43.732250] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:57.812 true 00:06:57.812 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:06:57.812 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:58.071 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:58.071 16:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:58.331 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4446912b-1d25-4372-bd2c-aaa6594eb1fb 00:06:58.331 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:58.592 [2024-11-20 16:18:44.402234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.592 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2020344 00:06:58.853 16:18:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2020344 /var/tmp/bdevperf.sock 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2020344 ']' 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:58.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.853 16:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:58.853 [2024-11-20 16:18:44.617378] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:06:58.853 [2024-11-20 16:18:44.617430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020344 ] 00:06:58.853 [2024-11-20 16:18:44.704595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.853 [2024-11-20 16:18:44.740501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.791 16:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.791 16:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:59.791 16:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:59.791 Nvme0n1 00:06:59.791 16:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:00.051 [ 00:07:00.051 { 00:07:00.051 "name": "Nvme0n1", 00:07:00.051 "aliases": [ 00:07:00.051 "4446912b-1d25-4372-bd2c-aaa6594eb1fb" 00:07:00.051 ], 00:07:00.051 "product_name": "NVMe disk", 00:07:00.051 "block_size": 4096, 00:07:00.051 "num_blocks": 38912, 00:07:00.051 "uuid": "4446912b-1d25-4372-bd2c-aaa6594eb1fb", 00:07:00.051 "numa_id": 0, 00:07:00.051 "assigned_rate_limits": { 00:07:00.051 "rw_ios_per_sec": 0, 00:07:00.051 "rw_mbytes_per_sec": 0, 00:07:00.051 "r_mbytes_per_sec": 0, 00:07:00.051 "w_mbytes_per_sec": 0 00:07:00.051 }, 00:07:00.051 "claimed": false, 00:07:00.051 "zoned": false, 00:07:00.051 "supported_io_types": { 00:07:00.051 "read": true, 
00:07:00.051 "write": true, 00:07:00.051 "unmap": true, 00:07:00.051 "flush": true, 00:07:00.051 "reset": true, 00:07:00.051 "nvme_admin": true, 00:07:00.051 "nvme_io": true, 00:07:00.051 "nvme_io_md": false, 00:07:00.051 "write_zeroes": true, 00:07:00.051 "zcopy": false, 00:07:00.051 "get_zone_info": false, 00:07:00.051 "zone_management": false, 00:07:00.051 "zone_append": false, 00:07:00.051 "compare": true, 00:07:00.051 "compare_and_write": true, 00:07:00.051 "abort": true, 00:07:00.051 "seek_hole": false, 00:07:00.051 "seek_data": false, 00:07:00.051 "copy": true, 00:07:00.051 "nvme_iov_md": false 00:07:00.051 }, 00:07:00.051 "memory_domains": [ 00:07:00.051 { 00:07:00.051 "dma_device_id": "system", 00:07:00.051 "dma_device_type": 1 00:07:00.051 } 00:07:00.051 ], 00:07:00.051 "driver_specific": { 00:07:00.051 "nvme": [ 00:07:00.051 { 00:07:00.051 "trid": { 00:07:00.051 "trtype": "TCP", 00:07:00.051 "adrfam": "IPv4", 00:07:00.051 "traddr": "10.0.0.2", 00:07:00.051 "trsvcid": "4420", 00:07:00.051 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:00.051 }, 00:07:00.051 "ctrlr_data": { 00:07:00.051 "cntlid": 1, 00:07:00.051 "vendor_id": "0x8086", 00:07:00.051 "model_number": "SPDK bdev Controller", 00:07:00.051 "serial_number": "SPDK0", 00:07:00.051 "firmware_revision": "25.01", 00:07:00.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:00.051 "oacs": { 00:07:00.051 "security": 0, 00:07:00.051 "format": 0, 00:07:00.051 "firmware": 0, 00:07:00.051 "ns_manage": 0 00:07:00.051 }, 00:07:00.051 "multi_ctrlr": true, 00:07:00.051 "ana_reporting": false 00:07:00.051 }, 00:07:00.051 "vs": { 00:07:00.051 "nvme_version": "1.3" 00:07:00.051 }, 00:07:00.051 "ns_data": { 00:07:00.051 "id": 1, 00:07:00.051 "can_share": true 00:07:00.051 } 00:07:00.051 } 00:07:00.051 ], 00:07:00.051 "mp_policy": "active_passive" 00:07:00.051 } 00:07:00.051 } 00:07:00.051 ] 00:07:00.051 16:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2020627 00:07:00.051 16:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:00.051 16:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:00.051 Running I/O for 10 seconds... 00:07:01.433 Latency(us) 00:07:01.433 [2024-11-20T15:18:47.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.433 Nvme0n1 : 1.00 17714.00 69.20 0.00 0.00 0.00 0.00 0.00 00:07:01.433 [2024-11-20T15:18:47.392Z] =================================================================================================================== 00:07:01.433 [2024-11-20T15:18:47.392Z] Total : 17714.00 69.20 0.00 0.00 0.00 0.00 0.00 00:07:01.433 00:07:02.004 16:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:02.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.264 Nvme0n1 : 2.00 17836.00 69.67 0.00 0.00 0.00 0.00 0.00 00:07:02.264 [2024-11-20T15:18:48.223Z] =================================================================================================================== 00:07:02.264 [2024-11-20T15:18:48.223Z] Total : 17836.00 69.67 0.00 0.00 0.00 0.00 0.00 00:07:02.264 00:07:02.264 true 00:07:02.264 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:02.265 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:02.525 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:02.525 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:02.525 16:18:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2020627 00:07:03.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.094 Nvme0n1 : 3.00 17875.33 69.83 0.00 0.00 0.00 0.00 0.00 00:07:03.095 [2024-11-20T15:18:49.054Z] =================================================================================================================== 00:07:03.095 [2024-11-20T15:18:49.054Z] Total : 17875.33 69.83 0.00 0.00 0.00 0.00 0.00 00:07:03.095 00:07:04.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.478 Nvme0n1 : 4.00 17924.50 70.02 0.00 0.00 0.00 0.00 0.00 00:07:04.478 [2024-11-20T15:18:50.437Z] =================================================================================================================== 00:07:04.478 [2024-11-20T15:18:50.437Z] Total : 17924.50 70.02 0.00 0.00 0.00 0.00 0.00 00:07:04.478 00:07:05.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.049 Nvme0n1 : 5.00 17930.20 70.04 0.00 0.00 0.00 0.00 0.00 00:07:05.049 [2024-11-20T15:18:51.008Z] =================================================================================================================== 00:07:05.049 [2024-11-20T15:18:51.008Z] Total : 17930.20 70.04 0.00 0.00 0.00 0.00 0.00 00:07:05.049 00:07:06.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.434 Nvme0n1 : 6.00 17953.17 70.13 0.00 0.00 0.00 0.00 0.00 00:07:06.434 [2024-11-20T15:18:52.393Z] =================================================================================================================== 00:07:06.434 
[2024-11-20T15:18:52.393Z] Total : 17953.17 70.13 0.00 0.00 0.00 0.00 0.00 00:07:06.434 00:07:07.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.374 Nvme0n1 : 7.00 17959.71 70.16 0.00 0.00 0.00 0.00 0.00 00:07:07.374 [2024-11-20T15:18:53.333Z] =================================================================================================================== 00:07:07.374 [2024-11-20T15:18:53.333Z] Total : 17959.71 70.16 0.00 0.00 0.00 0.00 0.00 00:07:07.374 00:07:08.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.382 Nvme0n1 : 8.00 17981.12 70.24 0.00 0.00 0.00 0.00 0.00 00:07:08.382 [2024-11-20T15:18:54.341Z] =================================================================================================================== 00:07:08.382 [2024-11-20T15:18:54.341Z] Total : 17981.12 70.24 0.00 0.00 0.00 0.00 0.00 00:07:08.382 00:07:09.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.323 Nvme0n1 : 9.00 17990.44 70.28 0.00 0.00 0.00 0.00 0.00 00:07:09.323 [2024-11-20T15:18:55.282Z] =================================================================================================================== 00:07:09.323 [2024-11-20T15:18:55.282Z] Total : 17990.44 70.28 0.00 0.00 0.00 0.00 0.00 00:07:09.323 00:07:10.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.264 Nvme0n1 : 10.00 18003.50 70.33 0.00 0.00 0.00 0.00 0.00 00:07:10.264 [2024-11-20T15:18:56.223Z] =================================================================================================================== 00:07:10.264 [2024-11-20T15:18:56.223Z] Total : 18003.50 70.33 0.00 0.00 0.00 0.00 0.00 00:07:10.264 00:07:10.264 00:07:10.264 Latency(us) 00:07:10.264 [2024-11-20T15:18:56.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:10.264 Nvme0n1 : 10.00 18003.95 70.33 0.00 0.00 7106.26 2867.20 13107.20 00:07:10.264 [2024-11-20T15:18:56.223Z] =================================================================================================================== 00:07:10.264 [2024-11-20T15:18:56.223Z] Total : 18003.95 70.33 0.00 0.00 7106.26 2867.20 13107.20 00:07:10.264 { 00:07:10.265 "results": [ 00:07:10.265 { 00:07:10.265 "job": "Nvme0n1", 00:07:10.265 "core_mask": "0x2", 00:07:10.265 "workload": "randwrite", 00:07:10.265 "status": "finished", 00:07:10.265 "queue_depth": 128, 00:07:10.265 "io_size": 4096, 00:07:10.265 "runtime": 10.003305, 00:07:10.265 "iops": 18003.949694625928, 00:07:10.265 "mibps": 70.32792849463253, 00:07:10.265 "io_failed": 0, 00:07:10.265 "io_timeout": 0, 00:07:10.265 "avg_latency_us": 7106.262141896031, 00:07:10.265 "min_latency_us": 2867.2, 00:07:10.265 "max_latency_us": 13107.2 00:07:10.265 } 00:07:10.265 ], 00:07:10.265 "core_count": 1 00:07:10.265 } 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2020344 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2020344 ']' 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2020344 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2020344 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2020344' 00:07:10.265 killing process with pid 2020344 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2020344 00:07:10.265 Received shutdown signal, test time was about 10.000000 seconds 00:07:10.265 00:07:10.265 Latency(us) 00:07:10.265 [2024-11-20T15:18:56.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.265 [2024-11-20T15:18:56.224Z] =================================================================================================================== 00:07:10.265 [2024-11-20T15:18:56.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2020344 00:07:10.265 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:10.525 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:10.786 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:10.786 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:11.046 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:11.047 16:18:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:11.047 [2024-11-20 16:18:56.905549] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.047 16:18:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:11.047 16:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:11.308 request: 00:07:11.308 { 00:07:11.308 "uuid": "8f19d2f9-2e09-47c3-a1ac-72bd618dd312", 00:07:11.308 "method": "bdev_lvol_get_lvstores", 00:07:11.308 "req_id": 1 00:07:11.308 } 00:07:11.308 Got JSON-RPC error response 00:07:11.308 response: 00:07:11.308 { 00:07:11.308 "code": -19, 00:07:11.308 "message": "No such device" 00:07:11.308 } 00:07:11.308 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:11.308 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.308 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.308 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.308 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:11.568 aio_bdev 00:07:11.568 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4446912b-1d25-4372-bd2c-aaa6594eb1fb 00:07:11.568 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4446912b-1d25-4372-bd2c-aaa6594eb1fb 00:07:11.568 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:11.568 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:11.568 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:11.568 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:11.568 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:11.568 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4446912b-1d25-4372-bd2c-aaa6594eb1fb -t 2000 00:07:11.829 [ 00:07:11.829 { 00:07:11.829 "name": "4446912b-1d25-4372-bd2c-aaa6594eb1fb", 00:07:11.829 "aliases": [ 00:07:11.829 "lvs/lvol" 00:07:11.829 ], 00:07:11.829 "product_name": "Logical Volume", 00:07:11.829 "block_size": 4096, 00:07:11.829 "num_blocks": 38912, 00:07:11.829 "uuid": "4446912b-1d25-4372-bd2c-aaa6594eb1fb", 00:07:11.829 "assigned_rate_limits": { 00:07:11.829 "rw_ios_per_sec": 0, 00:07:11.829 "rw_mbytes_per_sec": 0, 00:07:11.829 "r_mbytes_per_sec": 0, 00:07:11.829 "w_mbytes_per_sec": 0 00:07:11.829 }, 00:07:11.829 "claimed": false, 00:07:11.829 "zoned": false, 00:07:11.829 "supported_io_types": { 00:07:11.829 "read": true, 00:07:11.829 "write": true, 00:07:11.829 "unmap": true, 00:07:11.829 "flush": false, 00:07:11.829 "reset": true, 00:07:11.829 
"nvme_admin": false, 00:07:11.829 "nvme_io": false, 00:07:11.829 "nvme_io_md": false, 00:07:11.829 "write_zeroes": true, 00:07:11.829 "zcopy": false, 00:07:11.829 "get_zone_info": false, 00:07:11.829 "zone_management": false, 00:07:11.829 "zone_append": false, 00:07:11.829 "compare": false, 00:07:11.829 "compare_and_write": false, 00:07:11.829 "abort": false, 00:07:11.829 "seek_hole": true, 00:07:11.829 "seek_data": true, 00:07:11.829 "copy": false, 00:07:11.829 "nvme_iov_md": false 00:07:11.829 }, 00:07:11.829 "driver_specific": { 00:07:11.829 "lvol": { 00:07:11.829 "lvol_store_uuid": "8f19d2f9-2e09-47c3-a1ac-72bd618dd312", 00:07:11.829 "base_bdev": "aio_bdev", 00:07:11.829 "thin_provision": false, 00:07:11.829 "num_allocated_clusters": 38, 00:07:11.829 "snapshot": false, 00:07:11.829 "clone": false, 00:07:11.829 "esnap_clone": false 00:07:11.829 } 00:07:11.829 } 00:07:11.829 } 00:07:11.829 ] 00:07:11.829 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:11.829 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:11.829 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:12.089 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:12.089 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:12.089 16:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:12.089 16:18:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:12.089 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4446912b-1d25-4372-bd2c-aaa6594eb1fb 00:07:12.350 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f19d2f9-2e09-47c3-a1ac-72bd618dd312 00:07:12.610 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.871 00:07:12.871 real 0m15.764s 00:07:12.871 user 0m15.461s 00:07:12.871 sys 0m1.381s 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:12.871 ************************************ 00:07:12.871 END TEST lvs_grow_clean 00:07:12.871 ************************************ 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.871 ************************************ 
00:07:12.871 START TEST lvs_grow_dirty 00:07:12.871 ************************************ 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.871 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:13.132 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:13.132 16:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:13.132 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:13.132 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:13.132 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:13.393 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:13.393 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:13.393 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 lvol 150 00:07:13.654 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a4003334-8de0-482d-86da-3fc5e09d0263 00:07:13.654 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:13.654 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:13.654 [2024-11-20 16:18:59.588279] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:13.654 [2024-11-20 16:18:59.588331] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:13.654 true 00:07:13.655 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:13.655 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:13.916 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:13.916 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:14.178 16:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4003334-8de0-482d-86da-3fc5e09d0263 00:07:14.178 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:14.439 [2024-11-20 16:19:00.266336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.439 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2023703 00:07:14.699 16:19:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2023703 /var/tmp/bdevperf.sock 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2023703 ']' 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:14.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.699 16:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:14.699 [2024-11-20 16:19:00.498759] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:07:14.699 [2024-11-20 16:19:00.498811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023703 ] 00:07:14.699 [2024-11-20 16:19:00.587358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.699 [2024-11-20 16:19:00.623282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.642 16:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.642 16:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:15.642 16:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:15.902 Nvme0n1 00:07:15.902 16:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:15.902 [ 00:07:15.902 { 00:07:15.902 "name": "Nvme0n1", 00:07:15.902 "aliases": [ 00:07:15.902 "a4003334-8de0-482d-86da-3fc5e09d0263" 00:07:15.902 ], 00:07:15.902 "product_name": "NVMe disk", 00:07:15.902 "block_size": 4096, 00:07:15.902 "num_blocks": 38912, 00:07:15.902 "uuid": "a4003334-8de0-482d-86da-3fc5e09d0263", 00:07:15.902 "numa_id": 0, 00:07:15.902 "assigned_rate_limits": { 00:07:15.902 "rw_ios_per_sec": 0, 00:07:15.902 "rw_mbytes_per_sec": 0, 00:07:15.902 "r_mbytes_per_sec": 0, 00:07:15.902 "w_mbytes_per_sec": 0 00:07:15.902 }, 00:07:15.902 "claimed": false, 00:07:15.902 "zoned": false, 00:07:15.903 "supported_io_types": { 00:07:15.903 "read": true, 
00:07:15.903 "write": true, 00:07:15.903 "unmap": true, 00:07:15.903 "flush": true, 00:07:15.903 "reset": true, 00:07:15.903 "nvme_admin": true, 00:07:15.903 "nvme_io": true, 00:07:15.903 "nvme_io_md": false, 00:07:15.903 "write_zeroes": true, 00:07:15.903 "zcopy": false, 00:07:15.903 "get_zone_info": false, 00:07:15.903 "zone_management": false, 00:07:15.903 "zone_append": false, 00:07:15.903 "compare": true, 00:07:15.903 "compare_and_write": true, 00:07:15.903 "abort": true, 00:07:15.903 "seek_hole": false, 00:07:15.903 "seek_data": false, 00:07:15.903 "copy": true, 00:07:15.903 "nvme_iov_md": false 00:07:15.903 }, 00:07:15.903 "memory_domains": [ 00:07:15.903 { 00:07:15.903 "dma_device_id": "system", 00:07:15.903 "dma_device_type": 1 00:07:15.903 } 00:07:15.903 ], 00:07:15.903 "driver_specific": { 00:07:15.903 "nvme": [ 00:07:15.903 { 00:07:15.903 "trid": { 00:07:15.903 "trtype": "TCP", 00:07:15.903 "adrfam": "IPv4", 00:07:15.903 "traddr": "10.0.0.2", 00:07:15.903 "trsvcid": "4420", 00:07:15.903 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:15.903 }, 00:07:15.903 "ctrlr_data": { 00:07:15.903 "cntlid": 1, 00:07:15.903 "vendor_id": "0x8086", 00:07:15.903 "model_number": "SPDK bdev Controller", 00:07:15.903 "serial_number": "SPDK0", 00:07:15.903 "firmware_revision": "25.01", 00:07:15.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.903 "oacs": { 00:07:15.903 "security": 0, 00:07:15.903 "format": 0, 00:07:15.903 "firmware": 0, 00:07:15.903 "ns_manage": 0 00:07:15.903 }, 00:07:15.903 "multi_ctrlr": true, 00:07:15.903 "ana_reporting": false 00:07:15.903 }, 00:07:15.903 "vs": { 00:07:15.903 "nvme_version": "1.3" 00:07:15.903 }, 00:07:15.903 "ns_data": { 00:07:15.903 "id": 1, 00:07:15.903 "can_share": true 00:07:15.903 } 00:07:15.903 } 00:07:15.903 ], 00:07:15.903 "mp_policy": "active_passive" 00:07:15.903 } 00:07:15.903 } 00:07:15.903 ] 00:07:15.903 16:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:15.903 16:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2023885 00:07:15.903 16:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:16.164 Running I/O for 10 seconds... 00:07:17.107 Latency(us) 00:07:17.107 [2024-11-20T15:19:03.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.107 Nvme0n1 : 1.00 17722.00 69.23 0.00 0.00 0.00 0.00 0.00 00:07:17.107 [2024-11-20T15:19:03.066Z] =================================================================================================================== 00:07:17.107 [2024-11-20T15:19:03.066Z] Total : 17722.00 69.23 0.00 0.00 0.00 0.00 0.00 00:07:17.107 00:07:18.050 16:19:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:18.050 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.050 Nvme0n1 : 2.00 17816.50 69.60 0.00 0.00 0.00 0.00 0.00 00:07:18.050 [2024-11-20T15:19:04.009Z] =================================================================================================================== 00:07:18.050 [2024-11-20T15:19:04.009Z] Total : 17816.50 69.60 0.00 0.00 0.00 0.00 0.00 00:07:18.050 00:07:18.050 true 00:07:18.050 16:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:18.050 16:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:18.310 16:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:18.310 16:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:18.310 16:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2023885 00:07:19.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.251 Nvme0n1 : 3.00 17870.33 69.81 0.00 0.00 0.00 0.00 0.00 00:07:19.251 [2024-11-20T15:19:05.210Z] =================================================================================================================== 00:07:19.251 [2024-11-20T15:19:05.210Z] Total : 17870.33 69.81 0.00 0.00 0.00 0.00 0.00 00:07:19.251 00:07:20.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.192 Nvme0n1 : 4.00 17888.50 69.88 0.00 0.00 0.00 0.00 0.00 00:07:20.192 [2024-11-20T15:19:06.151Z] =================================================================================================================== 00:07:20.192 [2024-11-20T15:19:06.151Z] Total : 17888.50 69.88 0.00 0.00 0.00 0.00 0.00 00:07:20.192 00:07:21.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.132 Nvme0n1 : 5.00 17933.00 70.05 0.00 0.00 0.00 0.00 0.00 00:07:21.132 [2024-11-20T15:19:07.091Z] =================================================================================================================== 00:07:21.132 [2024-11-20T15:19:07.091Z] Total : 17933.00 70.05 0.00 0.00 0.00 0.00 0.00 00:07:21.132 00:07:22.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.073 Nvme0n1 : 6.00 17948.50 70.11 0.00 0.00 0.00 0.00 0.00 00:07:22.073 [2024-11-20T15:19:08.032Z] =================================================================================================================== 00:07:22.073 
[2024-11-20T15:19:08.032Z] Total : 17948.50 70.11 0.00 0.00 0.00 0.00 0.00 00:07:22.073 00:07:23.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.013 Nvme0n1 : 7.00 17962.57 70.17 0.00 0.00 0.00 0.00 0.00 00:07:23.013 [2024-11-20T15:19:08.972Z] =================================================================================================================== 00:07:23.013 [2024-11-20T15:19:08.972Z] Total : 17962.57 70.17 0.00 0.00 0.00 0.00 0.00 00:07:23.013 00:07:23.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.953 Nvme0n1 : 8.00 17975.25 70.22 0.00 0.00 0.00 0.00 0.00 00:07:23.953 [2024-11-20T15:19:09.912Z] =================================================================================================================== 00:07:23.953 [2024-11-20T15:19:09.912Z] Total : 17975.25 70.22 0.00 0.00 0.00 0.00 0.00 00:07:23.953 00:07:25.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.337 Nvme0n1 : 9.00 17986.22 70.26 0.00 0.00 0.00 0.00 0.00 00:07:25.337 [2024-11-20T15:19:11.296Z] =================================================================================================================== 00:07:25.337 [2024-11-20T15:19:11.296Z] Total : 17986.22 70.26 0.00 0.00 0.00 0.00 0.00 00:07:25.337 00:07:26.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.277 Nvme0n1 : 10.00 17996.00 70.30 0.00 0.00 0.00 0.00 0.00 00:07:26.277 [2024-11-20T15:19:12.236Z] =================================================================================================================== 00:07:26.277 [2024-11-20T15:19:12.236Z] Total : 17996.00 70.30 0.00 0.00 0.00 0.00 0.00 00:07:26.277 00:07:26.277 00:07:26.277 Latency(us) 00:07:26.277 [2024-11-20T15:19:12.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:26.277 Nvme0n1 : 10.00 17993.67 70.29 0.00 0.00 7110.10 3440.64 12670.29 00:07:26.277 [2024-11-20T15:19:12.236Z] =================================================================================================================== 00:07:26.277 [2024-11-20T15:19:12.236Z] Total : 17993.67 70.29 0.00 0.00 7110.10 3440.64 12670.29 00:07:26.277 { 00:07:26.277 "results": [ 00:07:26.277 { 00:07:26.277 "job": "Nvme0n1", 00:07:26.277 "core_mask": "0x2", 00:07:26.277 "workload": "randwrite", 00:07:26.277 "status": "finished", 00:07:26.277 "queue_depth": 128, 00:07:26.277 "io_size": 4096, 00:07:26.277 "runtime": 10.002964, 00:07:26.277 "iops": 17993.666677196878, 00:07:26.277 "mibps": 70.2877604578003, 00:07:26.277 "io_failed": 0, 00:07:26.277 "io_timeout": 0, 00:07:26.277 "avg_latency_us": 7110.096675296775, 00:07:26.277 "min_latency_us": 3440.64, 00:07:26.277 "max_latency_us": 12670.293333333333 00:07:26.277 } 00:07:26.277 ], 00:07:26.277 "core_count": 1 00:07:26.277 } 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2023703 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2023703 ']' 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2023703 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023703 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023703' 00:07:26.277 killing process with pid 2023703 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2023703 00:07:26.277 Received shutdown signal, test time was about 10.000000 seconds 00:07:26.277 00:07:26.277 Latency(us) 00:07:26.277 [2024-11-20T15:19:12.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.277 [2024-11-20T15:19:12.236Z] =================================================================================================================== 00:07:26.277 [2024-11-20T15:19:12.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:26.277 16:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2023703 00:07:26.277 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:26.539 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:26.539 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:26.539 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:26.800 16:19:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2019894 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2019894 00:07:26.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2019894 Killed "${NVMF_APP[@]}" "$@" 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2026072 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2026072 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2026072 ']' 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.800 16:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.800 [2024-11-20 16:19:12.677594] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:07:26.800 [2024-11-20 16:19:12.677648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.800 [2024-11-20 16:19:12.756245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.061 [2024-11-20 16:19:12.792091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.061 [2024-11-20 16:19:12.792119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.061 [2024-11-20 16:19:12.792126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.061 [2024-11-20 16:19:12.792133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.061 [2024-11-20 16:19:12.792139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:27.061 [2024-11-20 16:19:12.792733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.634 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.634 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:27.634 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.634 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.634 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:27.634 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.634 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.895 [2024-11-20 16:19:13.664477] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:27.895 [2024-11-20 16:19:13.664564] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:27.895 [2024-11-20 16:19:13.664594] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a4003334-8de0-482d-86da-3fc5e09d0263 00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a4003334-8de0-482d-86da-3fc5e09d0263 
00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.895 16:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4003334-8de0-482d-86da-3fc5e09d0263 -t 2000 00:07:28.156 [ 00:07:28.156 { 00:07:28.156 "name": "a4003334-8de0-482d-86da-3fc5e09d0263", 00:07:28.156 "aliases": [ 00:07:28.156 "lvs/lvol" 00:07:28.156 ], 00:07:28.156 "product_name": "Logical Volume", 00:07:28.156 "block_size": 4096, 00:07:28.156 "num_blocks": 38912, 00:07:28.156 "uuid": "a4003334-8de0-482d-86da-3fc5e09d0263", 00:07:28.156 "assigned_rate_limits": { 00:07:28.156 "rw_ios_per_sec": 0, 00:07:28.156 "rw_mbytes_per_sec": 0, 00:07:28.156 "r_mbytes_per_sec": 0, 00:07:28.156 "w_mbytes_per_sec": 0 00:07:28.156 }, 00:07:28.156 "claimed": false, 00:07:28.156 "zoned": false, 00:07:28.156 "supported_io_types": { 00:07:28.156 "read": true, 00:07:28.156 "write": true, 00:07:28.156 "unmap": true, 00:07:28.156 "flush": false, 00:07:28.156 "reset": true, 00:07:28.156 "nvme_admin": false, 00:07:28.156 "nvme_io": false, 00:07:28.156 "nvme_io_md": false, 00:07:28.156 "write_zeroes": true, 00:07:28.156 "zcopy": false, 00:07:28.156 "get_zone_info": false, 00:07:28.156 "zone_management": false, 00:07:28.156 "zone_append": 
false, 00:07:28.156 "compare": false, 00:07:28.156 "compare_and_write": false, 00:07:28.156 "abort": false, 00:07:28.156 "seek_hole": true, 00:07:28.156 "seek_data": true, 00:07:28.156 "copy": false, 00:07:28.156 "nvme_iov_md": false 00:07:28.156 }, 00:07:28.156 "driver_specific": { 00:07:28.156 "lvol": { 00:07:28.156 "lvol_store_uuid": "91a5a723-04a2-4eba-ac0c-9e1c3d6452d9", 00:07:28.156 "base_bdev": "aio_bdev", 00:07:28.156 "thin_provision": false, 00:07:28.156 "num_allocated_clusters": 38, 00:07:28.156 "snapshot": false, 00:07:28.156 "clone": false, 00:07:28.156 "esnap_clone": false 00:07:28.156 } 00:07:28.156 } 00:07:28.156 } 00:07:28.156 ] 00:07:28.156 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:28.156 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:28.156 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:28.417 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:28.417 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:28.417 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:28.417 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:28.417 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:28.677 [2024-11-20 16:19:14.492625] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.677 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.677 16:19:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.678 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:28.939 request: 00:07:28.939 { 00:07:28.939 "uuid": "91a5a723-04a2-4eba-ac0c-9e1c3d6452d9", 00:07:28.939 "method": "bdev_lvol_get_lvstores", 00:07:28.939 "req_id": 1 00:07:28.939 } 00:07:28.939 Got JSON-RPC error response 00:07:28.939 response: 00:07:28.939 { 00:07:28.939 "code": -19, 00:07:28.939 "message": "No such device" 00:07:28.939 } 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.939 aio_bdev 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a4003334-8de0-482d-86da-3fc5e09d0263 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a4003334-8de0-482d-86da-3fc5e09d0263 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.939 16:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:29.200 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a4003334-8de0-482d-86da-3fc5e09d0263 -t 2000 00:07:29.460 [ 00:07:29.460 { 00:07:29.460 "name": "a4003334-8de0-482d-86da-3fc5e09d0263", 00:07:29.460 "aliases": [ 00:07:29.460 "lvs/lvol" 00:07:29.460 ], 00:07:29.460 "product_name": "Logical Volume", 00:07:29.460 "block_size": 4096, 00:07:29.460 "num_blocks": 38912, 00:07:29.460 "uuid": "a4003334-8de0-482d-86da-3fc5e09d0263", 00:07:29.460 "assigned_rate_limits": { 00:07:29.460 "rw_ios_per_sec": 0, 00:07:29.460 "rw_mbytes_per_sec": 0, 00:07:29.460 "r_mbytes_per_sec": 0, 00:07:29.460 "w_mbytes_per_sec": 0 00:07:29.460 }, 00:07:29.460 "claimed": false, 00:07:29.460 "zoned": false, 00:07:29.460 "supported_io_types": { 00:07:29.460 "read": true, 00:07:29.460 "write": true, 00:07:29.460 "unmap": true, 00:07:29.460 "flush": false, 00:07:29.460 "reset": true, 00:07:29.460 "nvme_admin": false, 00:07:29.460 "nvme_io": false, 00:07:29.460 "nvme_io_md": false, 00:07:29.460 "write_zeroes": true, 00:07:29.460 "zcopy": false, 00:07:29.460 "get_zone_info": false, 00:07:29.460 "zone_management": false, 00:07:29.460 "zone_append": false, 00:07:29.460 "compare": false, 00:07:29.460 "compare_and_write": false, 
00:07:29.460 "abort": false, 00:07:29.460 "seek_hole": true, 00:07:29.460 "seek_data": true, 00:07:29.460 "copy": false, 00:07:29.460 "nvme_iov_md": false 00:07:29.460 }, 00:07:29.460 "driver_specific": { 00:07:29.460 "lvol": { 00:07:29.460 "lvol_store_uuid": "91a5a723-04a2-4eba-ac0c-9e1c3d6452d9", 00:07:29.460 "base_bdev": "aio_bdev", 00:07:29.460 "thin_provision": false, 00:07:29.460 "num_allocated_clusters": 38, 00:07:29.460 "snapshot": false, 00:07:29.460 "clone": false, 00:07:29.460 "esnap_clone": false 00:07:29.460 } 00:07:29.460 } 00:07:29.460 } 00:07:29.460 ] 00:07:29.460 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:29.460 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:29.460 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:29.460 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:29.461 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:29.461 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:29.721 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:29.721 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a4003334-8de0-482d-86da-3fc5e09d0263 00:07:29.721 16:19:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91a5a723-04a2-4eba-ac0c-9e1c3d6452d9 00:07:29.981 16:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.241 00:07:30.241 real 0m17.369s 00:07:30.241 user 0m45.771s 00:07:30.241 sys 0m2.834s 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.241 ************************************ 00:07:30.241 END TEST lvs_grow_dirty 00:07:30.241 ************************************ 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:30.241 nvmf_trace.0 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:30.241 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:30.241 rmmod nvme_tcp 00:07:30.241 rmmod nvme_fabrics 00:07:30.502 rmmod nvme_keyring 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2026072 ']' 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2026072 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2026072 ']' 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2026072 
00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2026072 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2026072' 00:07:30.502 killing process with pid 2026072 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2026072 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2026072 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.502 16:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.046 00:07:33.046 real 0m44.408s 00:07:33.046 user 1m7.605s 00:07:33.046 sys 0m10.160s 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:33.046 ************************************ 00:07:33.046 END TEST nvmf_lvs_grow 00:07:33.046 ************************************ 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.046 ************************************ 00:07:33.046 START TEST nvmf_bdev_io_wait 00:07:33.046 ************************************ 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:33.046 * Looking for test storage... 
00:07:33.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.046 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.046 --rc genhtml_branch_coverage=1 00:07:33.046 --rc genhtml_function_coverage=1 00:07:33.046 --rc genhtml_legend=1 00:07:33.046 --rc geninfo_all_blocks=1 00:07:33.046 --rc geninfo_unexecuted_blocks=1 00:07:33.046 00:07:33.046 ' 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.046 --rc genhtml_branch_coverage=1 00:07:33.046 --rc genhtml_function_coverage=1 00:07:33.046 --rc genhtml_legend=1 00:07:33.046 --rc geninfo_all_blocks=1 00:07:33.046 --rc geninfo_unexecuted_blocks=1 00:07:33.046 00:07:33.046 ' 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.046 --rc genhtml_branch_coverage=1 00:07:33.046 --rc genhtml_function_coverage=1 00:07:33.046 --rc genhtml_legend=1 00:07:33.046 --rc geninfo_all_blocks=1 00:07:33.046 --rc geninfo_unexecuted_blocks=1 00:07:33.046 00:07:33.046 ' 00:07:33.046 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.047 --rc genhtml_branch_coverage=1 00:07:33.047 --rc genhtml_function_coverage=1 00:07:33.047 --rc genhtml_legend=1 00:07:33.047 --rc geninfo_all_blocks=1 00:07:33.047 --rc geninfo_unexecuted_blocks=1 00:07:33.047 00:07:33.047 ' 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.047 16:19:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.047 16:19:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
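The arrays populated above (`e810`, `x722`, `mlx`, keyed by entries like `$intel:0x159b`) drive the device classification that later prints `Found 0000:31:00.0 (0x8086 - 0x159b)`. A rough sketch of that vendor:device lookup, assuming simplified ID lists taken from the IDs visible in this trace (the real `gather_supported_nvmf_pci_devs` reads them from a PCI bus cache):

```shell
#!/usr/bin/env bash
# Classify a PCI device by vendor and device ID, mirroring the e810/mlx
# buckets in nvmf/common.sh. Only a subset of IDs is listed here.
intel=0x8086 mellanox=0x15b3
e810_ids=(0x1592 0x159b)
mlx5_ids=(0x1017 0x1019)

classify() {
    local vendor=$1 device=$2 id
    if [[ $vendor == "$intel" ]]; then
        for id in "${e810_ids[@]}"; do
            [[ $device == "$id" ]] && { echo e810; return; }
        done
    elif [[ $vendor == "$mellanox" ]]; then
        for id in "${mlx5_ids[@]}"; do
            [[ $device == "$id" ]] && { echo mlx; return; }
        done
    fi
    echo unknown
}

classify 0x8086 0x159b
```

On this test bed both NICs report `0x8086:0x159b`, so they land in the `e810` bucket and are driven by `ice`, as the trace's `[[ ice == unknown ]]` checks confirm.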
00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:41.323 16:19:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:41.323 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:41.323 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.323 16:19:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:41.323 Found net devices under 0000:31:00.0: cvl_0_0 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.323 
16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:41.323 Found net devices under 0000:31:00.1: cvl_0_1 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.323 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.324 16:19:25 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.324 16:19:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:07:41.324 00:07:41.324 --- 10.0.0.2 ping statistics --- 00:07:41.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.324 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:07:41.324 00:07:41.324 --- 10.0.0.1 ping statistics --- 00:07:41.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.324 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2031186 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2031186 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2031186 ']' 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.324 16:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 [2024-11-20 16:19:26.309795] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:07:41.324 [2024-11-20 16:19:26.309860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.324 [2024-11-20 16:19:26.393949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.324 [2024-11-20 16:19:26.437440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.324 [2024-11-20 16:19:26.437476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
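`waitforlisten` above blocks until `nvmf_tgt` creates its RPC socket at `/var/tmp/spdk.sock`. A hedged sketch of that polling loop; the function name echoes the log but the body, retry count, and sleep interval are my assumptions:

```shell
#!/usr/bin/env bash
# Poll for a UNIX domain socket with a bounded retry budget, roughly
# what waitforlisten does before the RPC client may connect.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [[ -S $sock ]] && return 0    # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

# With no server running, the wait times out and returns nonzero.
rc=0
wait_for_sock /tmp/definitely-missing.sock 2 || rc=$?
echo "rc=$rc"
```

In the real harness a timeout here aborts the test early instead of letting every subsequent RPC fail with a connection error.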
00:07:41.324 [2024-11-20 16:19:26.437485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.324 [2024-11-20 16:19:26.437491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.324 [2024-11-20 16:19:26.437497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.324 [2024-11-20 16:19:26.439372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.324 [2024-11-20 16:19:26.439491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.324 [2024-11-20 16:19:26.439649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.324 [2024-11-20 16:19:26.439649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 16:19:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 [2024-11-20 16:19:27.220263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 Malloc0 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.324 
16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.324 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.325 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.325 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.325 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.586 [2024-11-20 16:19:27.279525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2031251 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2031254 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
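Each bdevperf instance above is launched with `--json /dev/fd/63`, which is bash process substitution at work: the generated target JSON is handed over as a file descriptor and never touches disk. A small sketch of the same mechanism, with `cat` standing in for bdevperf reading its config:

```shell
#!/usr/bin/env bash
# /dev/fd/63 in the log is what <(...) expands to: a pseudo-file backed
# by the output of the inner command. Here cat plays the consumer role
# that bdevperf has in the trace.
config='{"method": "bdev_nvme_attach_controller"}'
out=$(cat <(printf '%s' "$config"))
echo "$out"
```

This is why the trace shows `gen_nvmf_target_json` running in the same pipeline as each bdevperf launch: the substitution evaluates the generator exactly when the consumer opens the descriptor.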
00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.586 { 00:07:41.586 "params": { 00:07:41.586 "name": "Nvme$subsystem", 00:07:41.586 "trtype": "$TEST_TRANSPORT", 00:07:41.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.586 "adrfam": "ipv4", 00:07:41.586 "trsvcid": "$NVMF_PORT", 00:07:41.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.586 "hdgst": ${hdgst:-false}, 00:07:41.586 "ddgst": ${ddgst:-false} 00:07:41.586 }, 00:07:41.586 "method": "bdev_nvme_attach_controller" 00:07:41.586 } 00:07:41.586 EOF 00:07:41.586 )") 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2031257 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.586 { 00:07:41.586 "params": { 00:07:41.586 
"name": "Nvme$subsystem", 00:07:41.586 "trtype": "$TEST_TRANSPORT", 00:07:41.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.586 "adrfam": "ipv4", 00:07:41.586 "trsvcid": "$NVMF_PORT", 00:07:41.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.586 "hdgst": ${hdgst:-false}, 00:07:41.586 "ddgst": ${ddgst:-false} 00:07:41.586 }, 00:07:41.586 "method": "bdev_nvme_attach_controller" 00:07:41.586 } 00:07:41.586 EOF 00:07:41.586 )") 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2031261 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.586 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.586 { 00:07:41.586 "params": { 00:07:41.587 "name": "Nvme$subsystem", 00:07:41.587 "trtype": "$TEST_TRANSPORT", 00:07:41.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.587 "adrfam": "ipv4", 00:07:41.587 "trsvcid": "$NVMF_PORT", 00:07:41.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.587 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:41.587 "hdgst": ${hdgst:-false}, 00:07:41.587 "ddgst": ${ddgst:-false} 00:07:41.587 }, 00:07:41.587 "method": "bdev_nvme_attach_controller" 00:07:41.587 } 00:07:41.587 EOF 00:07:41.587 )") 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.587 { 00:07:41.587 "params": { 00:07:41.587 "name": "Nvme$subsystem", 00:07:41.587 "trtype": "$TEST_TRANSPORT", 00:07:41.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.587 "adrfam": "ipv4", 00:07:41.587 "trsvcid": "$NVMF_PORT", 00:07:41.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.587 "hdgst": ${hdgst:-false}, 00:07:41.587 "ddgst": ${ddgst:-false} 00:07:41.587 }, 00:07:41.587 "method": "bdev_nvme_attach_controller" 00:07:41.587 } 00:07:41.587 EOF 00:07:41.587 )") 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2031251 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.587 "params": { 00:07:41.587 "name": "Nvme1", 00:07:41.587 "trtype": "tcp", 00:07:41.587 "traddr": "10.0.0.2", 00:07:41.587 "adrfam": "ipv4", 00:07:41.587 "trsvcid": "4420", 00:07:41.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:41.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:41.587 "hdgst": false, 00:07:41.587 "ddgst": false 00:07:41.587 }, 00:07:41.587 "method": "bdev_nvme_attach_controller" 00:07:41.587 }' 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
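The heredoc fragments above show how `gen_nvmf_target_json` builds each connection stanza: shell expansion fills `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` before the joined fragments are piped through `jq .`, which is why the printed result carries the literal values `tcp`, `10.0.0.2`, and `4420`. A hedged sketch of just the substitution step for one subsystem (the jq pretty-printing pass is omitted):

```shell
#!/usr/bin/env bash
# One subsystem's params, built the way the heredoc in nvmf/common.sh
# does it: an unquoted heredoc delimiter lets the shell expand the
# placeholders before any JSON tooling runs.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The `${hdgst:-false}` and `${ddgst:-false}` expansions in the real template work the same way, defaulting both digests to `false` when the caller sets neither.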
00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.587 "params": { 00:07:41.587 "name": "Nvme1", 00:07:41.587 "trtype": "tcp", 00:07:41.587 "traddr": "10.0.0.2", 00:07:41.587 "adrfam": "ipv4", 00:07:41.587 "trsvcid": "4420", 00:07:41.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:41.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:41.587 "hdgst": false, 00:07:41.587 "ddgst": false 00:07:41.587 }, 00:07:41.587 "method": "bdev_nvme_attach_controller" 00:07:41.587 }' 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.587 "params": { 00:07:41.587 "name": "Nvme1", 00:07:41.587 "trtype": "tcp", 00:07:41.587 "traddr": "10.0.0.2", 00:07:41.587 "adrfam": "ipv4", 00:07:41.587 "trsvcid": "4420", 00:07:41.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:41.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:41.587 "hdgst": false, 00:07:41.587 "ddgst": false 00:07:41.587 }, 00:07:41.587 "method": "bdev_nvme_attach_controller" 00:07:41.587 }' 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:41.587 16:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.587 "params": { 00:07:41.587 "name": "Nvme1", 00:07:41.587 "trtype": "tcp", 00:07:41.587 "traddr": "10.0.0.2", 00:07:41.587 "adrfam": "ipv4", 00:07:41.587 "trsvcid": "4420", 00:07:41.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:41.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:41.587 "hdgst": false, 00:07:41.587 "ddgst": false 00:07:41.587 }, 00:07:41.587 "method": "bdev_nvme_attach_controller" 00:07:41.587 }' 00:07:41.587 [2024-11-20 16:19:27.333365] Starting SPDK v25.01-pre git sha1 
7bc1aace1 / DPDK 24.03.0 initialization... 00:07:41.587 [2024-11-20 16:19:27.333415] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:41.587 [2024-11-20 16:19:27.334664] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:07:41.587 [2024-11-20 16:19:27.334710] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:41.587 [2024-11-20 16:19:27.338318] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:07:41.587 [2024-11-20 16:19:27.338363] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:41.587 [2024-11-20 16:19:27.338859] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:07:41.587 [2024-11-20 16:19:27.338901] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:41.587 [2024-11-20 16:19:27.483784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.587 [2024-11-20 16:19:27.512641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:41.587 [2024-11-20 16:19:27.538476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.848 [2024-11-20 16:19:27.567330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:41.848 [2024-11-20 16:19:27.599688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.848 [2024-11-20 16:19:27.627993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:41.848 [2024-11-20 16:19:27.661035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.848 [2024-11-20 16:19:27.689908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:41.848 Running I/O for 1 seconds... 00:07:41.848 Running I/O for 1 seconds... 00:07:41.848 Running I/O for 1 seconds... 00:07:42.110 Running I/O for 1 seconds... 
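The identical `bdev_nvme_attach_controller` payloads printf'd by nvmf/common.sh near the top of this run can be rebuilt and sanity-checked offline. A minimal sketch — the field values are copied verbatim from the log above, and `python3` is used only as a JSON validator, not as part of the test flow:

```shell
# Rebuild the JSON-RPC request that common.sh assembles for each bdevperf
# instance; all field values are copied verbatim from the log output above.
payload='{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
# Confirm the payload is well-formed JSON and extract the RPC method name.
echo "$payload" | python3 -c 'import json,sys; print(json.load(sys.stdin)["method"])'
```

The same attachment is normally driven through SPDK's `scripts/rpc.py bdev_nvme_attach_controller`; the exact CLI flags vary by SPDK version, so the raw JSON above is the stable reference point.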
00:07:43.053 10995.00 IOPS, 42.95 MiB/s 00:07:43.053 Latency(us) 00:07:43.053 [2024-11-20T15:19:29.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.053 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:43.053 Nvme1n1 : 1.01 11003.99 42.98 0.00 0.00 11564.40 4833.28 15837.87 00:07:43.053 [2024-11-20T15:19:29.012Z] =================================================================================================================== 00:07:43.053 [2024-11-20T15:19:29.012Z] Total : 11003.99 42.98 0.00 0.00 11564.40 4833.28 15837.87 00:07:43.053 12242.00 IOPS, 47.82 MiB/s 00:07:43.053 Latency(us) 00:07:43.053 [2024-11-20T15:19:29.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.053 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:43.053 Nvme1n1 : 1.01 12300.85 48.05 0.00 0.00 10369.62 5297.49 21517.65 00:07:43.053 [2024-11-20T15:19:29.012Z] =================================================================================================================== 00:07:43.053 [2024-11-20T15:19:29.012Z] Total : 12300.85 48.05 0.00 0.00 10369.62 5297.49 21517.65 00:07:43.053 11301.00 IOPS, 44.14 MiB/s 00:07:43.053 Latency(us) 00:07:43.053 [2024-11-20T15:19:29.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.053 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:43.053 Nvme1n1 : 1.01 11437.03 44.68 0.00 0.00 11166.86 2334.72 26651.31 00:07:43.053 [2024-11-20T15:19:29.012Z] =================================================================================================================== 00:07:43.053 [2024-11-20T15:19:29.012Z] Total : 11437.03 44.68 0.00 0.00 11166.86 2334.72 26651.31 00:07:43.053 180000.00 IOPS, 703.12 MiB/s 00:07:43.053 Latency(us) 00:07:43.053 [2024-11-20T15:19:29.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.053 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:43.053 Nvme1n1 : 1.00 179646.37 701.74 0.00 0.00 708.34 303.79 1966.08 00:07:43.053 [2024-11-20T15:19:29.012Z] =================================================================================================================== 00:07:43.053 [2024-11-20T15:19:29.012Z] Total : 179646.37 701.74 0.00 0.00 708.34 303.79 1966.08 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2031254 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2031257 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2031261 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
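The MiB/s column in the bdevperf tables above is derived directly from IOPS and the 4096-byte IO size (MiB/s = IOPS × 4096 / 2^20). A quick cross-check against two of the logged rows:

```shell
# Cross-check MiB/s = IOPS * io_size / 2^20 against the logged values:
# 11003.99 IOPS (read job) and 179646.37 IOPS (flush job), 4096-byte IOs.
awk 'BEGIN { printf "%.2f\n", 11003.99  * 4096 / 1048576 }'   # -> 42.98
awk 'BEGIN { printf "%.2f\n", 179646.37 * 4096 / 1048576 }'   # -> 701.74
```

Both results match the 42.98 MiB/s and 701.74 MiB/s figures reported in the tables.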
00:07:43.053 16:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.053 rmmod nvme_tcp 00:07:43.053 rmmod nvme_fabrics 00:07:43.053 rmmod nvme_keyring 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2031186 ']' 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2031186 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2031186 ']' 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2031186 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2031186 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2031186' 00:07:43.314 killing process with pid 2031186 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2031186 00:07:43.314 16:19:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2031186 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.314 16:19:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.862 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.862 00:07:45.862 real 0m12.727s 00:07:45.862 user 0m18.353s 00:07:45.862 sys 0m6.976s 00:07:45.862 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.862 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.862 ************************************ 
00:07:45.863 END TEST nvmf_bdev_io_wait 00:07:45.863 ************************************ 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.863 ************************************ 00:07:45.863 START TEST nvmf_queue_depth 00:07:45.863 ************************************ 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:45.863 * Looking for test storage... 00:07:45.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.863 --rc genhtml_branch_coverage=1 00:07:45.863 --rc genhtml_function_coverage=1 00:07:45.863 --rc genhtml_legend=1 00:07:45.863 --rc geninfo_all_blocks=1 00:07:45.863 --rc 
geninfo_unexecuted_blocks=1 00:07:45.863 00:07:45.863 ' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.863 --rc genhtml_branch_coverage=1 00:07:45.863 --rc genhtml_function_coverage=1 00:07:45.863 --rc genhtml_legend=1 00:07:45.863 --rc geninfo_all_blocks=1 00:07:45.863 --rc geninfo_unexecuted_blocks=1 00:07:45.863 00:07:45.863 ' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.863 --rc genhtml_branch_coverage=1 00:07:45.863 --rc genhtml_function_coverage=1 00:07:45.863 --rc genhtml_legend=1 00:07:45.863 --rc geninfo_all_blocks=1 00:07:45.863 --rc geninfo_unexecuted_blocks=1 00:07:45.863 00:07:45.863 ' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.863 --rc genhtml_branch_coverage=1 00:07:45.863 --rc genhtml_function_coverage=1 00:07:45.863 --rc genhtml_legend=1 00:07:45.863 --rc geninfo_all_blocks=1 00:07:45.863 --rc geninfo_unexecuted_blocks=1 00:07:45.863 00:07:45.863 ' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.863 16:19:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.863 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.864 16:19:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.864 16:19:31 
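The `[: : integer expression expected` warning logged above from nvmf/common.sh line 33 is the usual empty-string-in-arithmetic-test failure: `[ '' -eq 1 ]` is not a valid integer comparison, so `[` complains and the test evaluates false (which is why the run continues harmlessly). A minimal reproduction, using a hypothetical `FLAG` variable since the real variable name is not visible in the log:

```shell
# An empty variable in an arithmetic test makes [ complain, exactly as
# common.sh line 33 does above; the test then simply evaluates false.
FLAG=""
if [ "$FLAG" -eq 1 ] 2>/dev/null; then echo "enabled"; else echo "disabled"; fi
# Defaulting with ${FLAG:-0} keeps the comparison well-formed and silent:
if [ "${FLAG:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi
```

Both branches print "disabled"; the second form avoids the warning entirely.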
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.864 16:19:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.455 16:19:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.455 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:52.456 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:52.456 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:52.456 Found net devices under 0000:31:00.0: cvl_0_0 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:52.456 Found net devices under 0000:31:00.1: cvl_0_1 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.456 
16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:07:52.456 00:07:52.456 --- 10.0.0.2 ping statistics --- 00:07:52.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.456 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:07:52.456 00:07:52.456 --- 10.0.0.1 ping statistics --- 00:07:52.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.456 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.456 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2035951 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2035951 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2035951 ']' 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.717 16:19:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.717 [2024-11-20 16:19:38.491715] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:07:52.717 [2024-11-20 16:19:38.491766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.717 [2024-11-20 16:19:38.590413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.717 [2024-11-20 16:19:38.639440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.717 [2024-11-20 16:19:38.639487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:52.717 [2024-11-20 16:19:38.639495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.717 [2024-11-20 16:19:38.639502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.717 [2024-11-20 16:19:38.639509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.717 [2024-11-20 16:19:38.640303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.662 [2024-11-20 16:19:39.344378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.662 Malloc0 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.662 [2024-11-20 16:19:39.389438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.662 16:19:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2036002 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2036002 /var/tmp/bdevperf.sock 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2036002 ']' 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.662 16:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.662 [2024-11-20 16:19:39.449067] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:07:53.662 [2024-11-20 16:19:39.449132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2036002 ] 00:07:53.662 [2024-11-20 16:19:39.525003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.662 [2024-11-20 16:19:39.567390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.606 16:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.606 16:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:54.606 16:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:54.606 16:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.606 16:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:54.606 NVMe0n1 00:07:54.606 16:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.606 16:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.606 Running I/O for 10 seconds... 
00:07:56.497 9729.00 IOPS, 38.00 MiB/s [2024-11-20T15:19:43.841Z] 10752.00 IOPS, 42.00 MiB/s [2024-11-20T15:19:44.783Z] 10945.67 IOPS, 42.76 MiB/s [2024-11-20T15:19:45.725Z] 11254.50 IOPS, 43.96 MiB/s [2024-11-20T15:19:46.668Z] 11271.40 IOPS, 44.03 MiB/s [2024-11-20T15:19:47.608Z] 11382.50 IOPS, 44.46 MiB/s [2024-11-20T15:19:48.547Z] 11410.86 IOPS, 44.57 MiB/s [2024-11-20T15:19:49.486Z] 11393.50 IOPS, 44.51 MiB/s [2024-11-20T15:19:50.870Z] 11437.67 IOPS, 44.68 MiB/s [2024-11-20T15:19:50.870Z] 11470.00 IOPS, 44.80 MiB/s 00:08:04.911 Latency(us) 00:08:04.911 [2024-11-20T15:19:50.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.912 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:04.912 Verification LBA range: start 0x0 length 0x4000 00:08:04.912 NVMe0n1 : 10.07 11497.08 44.91 0.00 0.00 88784.92 24357.55 76458.67 00:08:04.912 [2024-11-20T15:19:50.871Z] =================================================================================================================== 00:08:04.912 [2024-11-20T15:19:50.871Z] Total : 11497.08 44.91 0.00 0.00 88784.92 24357.55 76458.67 00:08:04.912 { 00:08:04.912 "results": [ 00:08:04.912 { 00:08:04.912 "job": "NVMe0n1", 00:08:04.912 "core_mask": "0x1", 00:08:04.912 "workload": "verify", 00:08:04.912 "status": "finished", 00:08:04.912 "verify_range": { 00:08:04.912 "start": 0, 00:08:04.912 "length": 16384 00:08:04.912 }, 00:08:04.912 "queue_depth": 1024, 00:08:04.912 "io_size": 4096, 00:08:04.912 "runtime": 10.065514, 00:08:04.912 "iops": 11497.078042909681, 00:08:04.912 "mibps": 44.91046110511594, 00:08:04.912 "io_failed": 0, 00:08:04.912 "io_timeout": 0, 00:08:04.912 "avg_latency_us": 88784.91733112118, 00:08:04.912 "min_latency_us": 24357.546666666665, 00:08:04.912 "max_latency_us": 76458.66666666667 00:08:04.912 } 00:08:04.912 ], 00:08:04.912 "core_count": 1 00:08:04.912 } 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 2036002 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2036002 ']' 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2036002 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2036002 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2036002' 00:08:04.912 killing process with pid 2036002 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2036002 00:08:04.912 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.912 00:08:04.912 Latency(us) 00:08:04.912 [2024-11-20T15:19:50.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.912 [2024-11-20T15:19:50.871Z] =================================================================================================================== 00:08:04.912 [2024-11-20T15:19:50.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2036002 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.912 rmmod nvme_tcp 00:08:04.912 rmmod nvme_fabrics 00:08:04.912 rmmod nvme_keyring 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2035951 ']' 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2035951 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2035951 ']' 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2035951 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2035951 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2035951' 00:08:04.912 killing process with pid 2035951 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2035951 00:08:04.912 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2035951 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.172 16:19:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.089 16:19:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.351 00:08:07.351 real 0m21.663s 00:08:07.351 user 0m25.345s 00:08:07.351 sys 0m6.485s 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.351 ************************************ 00:08:07.351 END TEST nvmf_queue_depth 00:08:07.351 ************************************ 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.351 ************************************ 00:08:07.351 START TEST nvmf_target_multipath 00:08:07.351 ************************************ 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:07.351 * Looking for test storage... 
00:08:07.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:07.351 16:19:53 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.351 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:07.613 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.613 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.613 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.613 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:07.613 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:07.613 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.614 --rc genhtml_branch_coverage=1 00:08:07.614 --rc genhtml_function_coverage=1 00:08:07.614 --rc genhtml_legend=1 00:08:07.614 --rc geninfo_all_blocks=1 00:08:07.614 --rc geninfo_unexecuted_blocks=1 00:08:07.614 00:08:07.614 ' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.614 --rc genhtml_branch_coverage=1 00:08:07.614 --rc genhtml_function_coverage=1 00:08:07.614 --rc genhtml_legend=1 00:08:07.614 --rc geninfo_all_blocks=1 00:08:07.614 --rc geninfo_unexecuted_blocks=1 00:08:07.614 00:08:07.614 ' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.614 --rc genhtml_branch_coverage=1 00:08:07.614 --rc genhtml_function_coverage=1 00:08:07.614 --rc genhtml_legend=1 00:08:07.614 --rc geninfo_all_blocks=1 00:08:07.614 --rc geninfo_unexecuted_blocks=1 00:08:07.614 00:08:07.614 ' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.614 --rc genhtml_branch_coverage=1 00:08:07.614 --rc genhtml_function_coverage=1 00:08:07.614 --rc genhtml_legend=1 00:08:07.614 --rc geninfo_all_blocks=1 00:08:07.614 --rc geninfo_unexecuted_blocks=1 00:08:07.614 00:08:07.614 ' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.614 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.615 16:19:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.760 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:15.761 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:15.761 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:15.761 Found net devices under 0000:31:00.0: cvl_0_0 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.761 16:20:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:15.761 Found net devices under 0000:31:00.1: cvl_0_1 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:08:15.761 00:08:15.761 --- 10.0.0.2 ping statistics --- 00:08:15.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.761 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:08:15.761 00:08:15.761 --- 10.0.0.1 ping statistics --- 00:08:15.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.761 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.761 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:15.762 only one NIC for nvmf test 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:15.762 16:20:00 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.762 rmmod nvme_tcp 00:08:15.762 rmmod nvme_fabrics 00:08:15.762 rmmod nvme_keyring 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.762 16:20:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.148 16:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.148 00:08:17.148 real 0m9.867s 00:08:17.148 user 0m2.201s 00:08:17.148 sys 0m5.596s 00:08:17.148 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.148 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:17.148 ************************************ 00:08:17.148 END TEST nvmf_target_multipath 00:08:17.148 ************************************ 00:08:17.148 16:20:03 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:17.148 16:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.148 16:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.148 16:20:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.148 ************************************ 00:08:17.148 START TEST nvmf_zcopy 00:08:17.148 ************************************ 00:08:17.148 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:17.410 * Looking for test storage... 00:08:17.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.410 16:20:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.410 --rc genhtml_branch_coverage=1 00:08:17.410 --rc genhtml_function_coverage=1 00:08:17.410 --rc genhtml_legend=1 00:08:17.410 --rc geninfo_all_blocks=1 00:08:17.410 --rc geninfo_unexecuted_blocks=1 00:08:17.410 00:08:17.410 ' 00:08:17.410 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.411 --rc genhtml_branch_coverage=1 00:08:17.411 --rc genhtml_function_coverage=1 00:08:17.411 --rc genhtml_legend=1 00:08:17.411 --rc geninfo_all_blocks=1 00:08:17.411 --rc geninfo_unexecuted_blocks=1 00:08:17.411 00:08:17.411 ' 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.411 --rc genhtml_branch_coverage=1 00:08:17.411 --rc genhtml_function_coverage=1 00:08:17.411 --rc genhtml_legend=1 00:08:17.411 --rc geninfo_all_blocks=1 00:08:17.411 --rc geninfo_unexecuted_blocks=1 00:08:17.411 00:08:17.411 ' 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.411 --rc genhtml_branch_coverage=1 00:08:17.411 --rc 
genhtml_function_coverage=1 00:08:17.411 --rc genhtml_legend=1 00:08:17.411 --rc geninfo_all_blocks=1 00:08:17.411 --rc geninfo_unexecuted_blocks=1 00:08:17.411 00:08:17.411 ' 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.411 16:20:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.411 16:20:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.411 16:20:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.594 16:20:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:25.594 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:25.594 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:25.594 Found net devices under 0000:31:00.0: cvl_0_0 00:08:25.594 16:20:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:25.594 Found net devices under 0000:31:00.1: cvl_0_1 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.594 16:20:10 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:25.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:08:25.594 00:08:25.594 --- 10.0.0.2 ping statistics --- 00:08:25.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.594 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:08:25.594 00:08:25.594 --- 10.0.0.1 ping statistics --- 00:08:25.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.594 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.594 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2046730 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2046730 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2046730 ']' 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.595 16:20:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 [2024-11-20 16:20:10.462207] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:08:25.595 [2024-11-20 16:20:10.462260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.595 [2024-11-20 16:20:10.560207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.595 [2024-11-20 16:20:10.601800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.595 [2024-11-20 16:20:10.601843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:25.595 [2024-11-20 16:20:10.601851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.595 [2024-11-20 16:20:10.601859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.595 [2024-11-20 16:20:10.601865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.595 [2024-11-20 16:20:10.602599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 [2024-11-20 16:20:11.317161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 [2024-11-20 16:20:11.333459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 malloc0 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:25.595 { 00:08:25.595 "params": { 00:08:25.595 "name": "Nvme$subsystem", 00:08:25.595 "trtype": "$TEST_TRANSPORT", 00:08:25.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.595 "adrfam": "ipv4", 00:08:25.595 "trsvcid": "$NVMF_PORT", 00:08:25.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.595 "hdgst": ${hdgst:-false}, 00:08:25.595 "ddgst": ${ddgst:-false} 00:08:25.595 }, 00:08:25.595 "method": "bdev_nvme_attach_controller" 00:08:25.595 } 00:08:25.595 EOF 00:08:25.595 )") 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:25.595 16:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:25.595 "params": { 00:08:25.595 "name": "Nvme1", 00:08:25.595 "trtype": "tcp", 00:08:25.595 "traddr": "10.0.0.2", 00:08:25.595 "adrfam": "ipv4", 00:08:25.595 "trsvcid": "4420", 00:08:25.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:25.595 "hdgst": false, 00:08:25.595 "ddgst": false 00:08:25.595 }, 00:08:25.595 "method": "bdev_nvme_attach_controller" 00:08:25.595 }' 00:08:25.595 [2024-11-20 16:20:11.423435] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:08:25.595 [2024-11-20 16:20:11.423501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047080 ] 00:08:25.595 [2024-11-20 16:20:11.498864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.595 [2024-11-20 16:20:11.540777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.856 Running I/O for 10 seconds... 
00:08:28.182 7550.00 IOPS, 58.98 MiB/s [2024-11-20T15:20:14.712Z] 8646.50 IOPS, 67.55 MiB/s [2024-11-20T15:20:16.097Z] 9007.67 IOPS, 70.37 MiB/s [2024-11-20T15:20:17.038Z] 9189.50 IOPS, 71.79 MiB/s [2024-11-20T15:20:17.980Z] 9300.20 IOPS, 72.66 MiB/s [2024-11-20T15:20:18.922Z] 9373.17 IOPS, 73.23 MiB/s [2024-11-20T15:20:19.861Z] 9427.00 IOPS, 73.65 MiB/s [2024-11-20T15:20:20.802Z] 9467.75 IOPS, 73.97 MiB/s [2024-11-20T15:20:21.747Z] 9495.89 IOPS, 74.19 MiB/s [2024-11-20T15:20:21.747Z] 9521.40 IOPS, 74.39 MiB/s 00:08:35.788 Latency(us) 00:08:35.788 [2024-11-20T15:20:21.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.788 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:35.788 Verification LBA range: start 0x0 length 0x1000 00:08:35.788 Nvme1n1 : 10.01 9520.11 74.38 0.00 0.00 13393.87 1843.20 28835.84 00:08:35.788 [2024-11-20T15:20:21.747Z] =================================================================================================================== 00:08:35.788 [2024-11-20T15:20:21.747Z] Total : 9520.11 74.38 0.00 0.00 13393.87 1843.20 28835.84 00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2049096 00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.049 16:20:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:36.049 {
00:08:36.049 "params": {
00:08:36.049 "name": "Nvme$subsystem",
00:08:36.049 "trtype": "$TEST_TRANSPORT",
00:08:36.049 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:36.049 "adrfam": "ipv4",
00:08:36.049 "trsvcid": "$NVMF_PORT",
00:08:36.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:36.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:36.049 "hdgst": ${hdgst:-false},
00:08:36.049 "ddgst": ${ddgst:-false}
00:08:36.049 },
00:08:36.049 "method": "bdev_nvme_attach_controller"
00:08:36.049 }
00:08:36.049 EOF
00:08:36.049 )")
00:08:36.049 [2024-11-20 16:20:21.849146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:36.049 [2024-11-20 16:20:21.849176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
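The heredoc above is the template that the shell helper gen_nvmf_target_json (in nvmf/common.sh) expands per subsystem before piping it through jq to bdevperf. As an illustrative aside, the same substitution can be sketched in Python; the function below is hypothetical (not part of SPDK), and the default values are the resolved ones that appear in this log (traddr 10.0.0.2, trsvcid 4420, transport tcp):

```python
import json

def gen_nvmf_target_json(subsystem=1, trtype="tcp", traddr="10.0.0.2",
                         trsvcid="4420", hdgst=False, ddgst=False):
    """Mirror the nvmf/common.sh heredoc: build one
    bdev_nvme_attach_controller config entry for NvmeN."""
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }, indent=2)

# With the defaults this emits the same resolved config the log's
# printf '%s\n' shows being handed to bdevperf over /dev/fd/63.
print(gen_nvmf_target_json())
```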
00:08:36.049 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:36.050 16:20:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:36.050 "params": {
00:08:36.050 "name": "Nvme1",
00:08:36.050 "trtype": "tcp",
00:08:36.050 "traddr": "10.0.0.2",
00:08:36.050 "adrfam": "ipv4",
00:08:36.050 "trsvcid": "4420",
00:08:36.050 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:36.050 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:36.050 "hdgst": false,
00:08:36.050 "ddgst": false
00:08:36.050 },
00:08:36.050 "method": "bdev_nvme_attach_controller"
00:08:36.050 }'
00:08:36.050 [2024-11-20 16:20:21.894740] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:08:36.050 [2024-11-20 16:20:21.894787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049096 ]
00:08:36.050 [2024-11-20 16:20:21.964608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:36.050 [2024-11-20 16:20:21.999905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:36.311 Running I/O for 5 seconds...
add namespace 00:08:37.359 [2024-11-20 16:20:23.118788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.118802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.127913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.127927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.137069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.137083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.145721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.145735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.154444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.154458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.163352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.163366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.172282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.172296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.181420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.181434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.190129] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.190142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 18940.00 IOPS, 147.97 MiB/s [2024-11-20T15:20:23.318Z] [2024-11-20 16:20:23.199269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.199283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.208239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.208256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.217088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.217102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.226180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.226194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.235347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.235361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.244228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.244243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.253411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.253426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.261786] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.261800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.270596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.270610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.278922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.278935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.288320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.288334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.296631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.296644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.305707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.305721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.359 [2024-11-20 16:20:23.314262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.359 [2024-11-20 16:20:23.314275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.322708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.322722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.331366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.331380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.340378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.340392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.349460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.349474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.358561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.358575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.367481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.367495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.376376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.376393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.385465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.385480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.394515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.394529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.403504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 
[2024-11-20 16:20:23.403518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.411876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.411890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.420416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.420430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.429111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.429125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.437901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.437915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.447110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.447124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.455817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.455831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.464862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.464876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.472932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.472945] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.481584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.481598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.490421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.490434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.499016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.499029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.507716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.507730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.516777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.516791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.525851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.525865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.534310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.534324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.542884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.542898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:37.621 [2024-11-20 16:20:23.551892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.551906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.559799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.559813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.621 [2024-11-20 16:20:23.568507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.621 [2024-11-20 16:20:23.568521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.622 [2024-11-20 16:20:23.577182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.622 [2024-11-20 16:20:23.577196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.960 [2024-11-20 16:20:23.586100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.960 [2024-11-20 16:20:23.586115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.960 [2024-11-20 16:20:23.595190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.960 [2024-11-20 16:20:23.595204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.960 [2024-11-20 16:20:23.603979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.960 [2024-11-20 16:20:23.603998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.960 [2024-11-20 16:20:23.612947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.960 [2024-11-20 16:20:23.612961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.960 [2024-11-20 16:20:23.621666] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.960 [2024-11-20 16:20:23.621680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.960 [2024-11-20 16:20:23.629904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.960 [2024-11-20 16:20:23.629917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.638368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.638382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.646954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.646968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.655721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.655735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.665045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.665059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.673912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.673926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.682739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.682753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.691558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.691571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.700758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.700772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.708721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.708735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.717721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.717736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.726406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.726419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.735193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.735207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.743754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.743768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.752425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.752439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.761146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 
[2024-11-20 16:20:23.761159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.770005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.770019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.778709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.778723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.787755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.787769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.796027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.796041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.805035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.805049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.814254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.814267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.822959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.822972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.831877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.831891] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.840920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.840934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.849423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.849436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.858179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.858193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.866671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.866685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.961 [2024-11-20 16:20:23.875711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.961 [2024-11-20 16:20:23.875725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.884750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.884764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.893188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.893202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.901881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.901895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:38.241 [2024-11-20 16:20:23.910086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.910100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.918794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.918808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.927912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.927926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.937239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.937253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.945701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.945716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.954632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.954646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.963259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.963273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.971874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.971888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.980908] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.980922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.989450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.989464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:23.998192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:23.998205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:24.007044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:24.007058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:24.016064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:24.016078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:24.024603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:24.024616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:24.033576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:24.033593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:24.042611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.241 [2024-11-20 16:20:24.042624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.241 [2024-11-20 16:20:24.051724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.051738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.059993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.060007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.068446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.068460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.077905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.077919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.086450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.086464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.094939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.094953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.103644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.103658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.112442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.112456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.120921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 
[2024-11-20 16:20:24.120935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.129428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.129442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.138609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.138623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.147462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.147476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.155874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.155888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.164791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.164805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.173124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.173138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.181348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.181363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.242 [2024-11-20 16:20:24.189827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.242 [2024-11-20 16:20:24.189841] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 19041.00 IOPS, 148.76 MiB/s [2024-11-20T15:20:24.483Z] [2024-11-20 16:20:24.198891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.198909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.207957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.207971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.216578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.216592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.225357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.225371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.234330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.234344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.243362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.243375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.252459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.252473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.261753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.261767] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.270299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.270313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.279067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.279081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.287626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.287640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.296802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.296816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.304819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.304834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.313400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.313414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.322358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.322373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.331011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.331025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:38.524 [2024-11-20 16:20:24.340168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.340182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.349252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.349266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.357689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.357703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.366827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.366845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.375280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.375294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.384482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.384496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.393629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.393645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.402152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.402166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.410760] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.410774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.419412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.419427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.427873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.427888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.437071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.437085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.445754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.445768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.454432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.454447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.463479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.463494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.524 [2024-11-20 16:20:24.471775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.524 [2024-11-20 16:20:24.471790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.480788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.480802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.489862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.489877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.499010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.499025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.507615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.507629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.516475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.516489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.525471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.525485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.534144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.534161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.542961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.542975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.551714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 
[2024-11-20 16:20:24.551728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.560890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.560904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.569437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.569451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.578578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.578592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.587097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.587111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.595545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.595560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.604359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.604374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.613235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.613249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.622277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.622292] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.786 [2024-11-20 16:20:24.630342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.786 [2024-11-20 16:20:24.630357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.639366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.639380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.648329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.648344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.657221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.657235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.665856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.665871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.675137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.675151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.684013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.684027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.693046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.693060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:38.787 [2024-11-20 16:20:24.702128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.702143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.711260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.711274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.720397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.720411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.729833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.729848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.787 [2024-11-20 16:20:24.737876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.787 [2024-11-20 16:20:24.737891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.747035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.747050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.756095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.756109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.765193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.765207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.774362] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.774376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.782974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.782993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.791645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.791659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.800092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.800107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.808999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.809013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.818192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.818206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.826688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.826703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.835785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.835798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.844667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.844681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.853535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.853549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.862862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.862876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.871667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.871681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.880375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.880389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.889278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.889292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.897963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.897976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.907111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.907124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.915407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 
[2024-11-20 16:20:24.915421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.924412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.048 [2024-11-20 16:20:24.924426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.048 [2024-11-20 16:20:24.932947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:24.932961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.049 [2024-11-20 16:20:24.941435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:24.941449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.049 [2024-11-20 16:20:24.950374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:24.950387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.049 [2024-11-20 16:20:24.959462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:24.959475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.049 [2024-11-20 16:20:24.968457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:24.968471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.049 [2024-11-20 16:20:24.977573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:24.977587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.049 [2024-11-20 16:20:24.986616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:24.986629] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.049 [2024-11-20 16:20:24.995689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:24.995703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.049 [2024-11-20 16:20:25.004158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.049 [2024-11-20 16:20:25.004172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.012974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.012993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.021640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.021654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.030485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.030499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.039238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.039252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.047900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.047913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.056311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.056326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:39.310 [2024-11-20 16:20:25.065095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.065109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.073756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.073770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.082418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.082432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.090946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.090960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.100100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.100114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.108615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.108629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.117466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.117480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.125475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.125489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.134848] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.134862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.143302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.143315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.152109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.152122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.160968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.310 [2024-11-20 16:20:25.160986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.310 [2024-11-20 16:20:25.169564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.169577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.178076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.178090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.186975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.186992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.196568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.196590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 19066.67 IOPS, 148.96 MiB/s [2024-11-20T15:20:25.270Z] [2024-11-20 16:20:25.204725] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.204739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.213161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.213175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.221702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.221716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.230948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.230962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.239458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.239472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.248632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.248646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.311 [2024-11-20 16:20:25.257309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.311 [2024-11-20 16:20:25.257323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.572 [2024-11-20 16:20:25.266517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.572 [2024-11-20 16:20:25.266531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.572 [2024-11-20 16:20:25.275064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:39.572 [2024-11-20 16:20:25.275078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.572 [identical "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated for timestamps 16:20:25.284443 through 16:20:26.198004, elided] 19097.75 IOPS, 149.20 MiB/s [2024-11-20T15:20:26.317Z] [identical error pair repeated for timestamps 16:20:26.207306 through 16:20:26.707977, elided] [2024-11-20 16:20:26.716747] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.716761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.725952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.725966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.734582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.734596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.743508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.743522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.752274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.752288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.760968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.760985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.770163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.770178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.779143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.779157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.788414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.788428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.796903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.796917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.805718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.805733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.814544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.814558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.881 [2024-11-20 16:20:26.823393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.881 [2024-11-20 16:20:26.823406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.882 [2024-11-20 16:20:26.832507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.882 [2024-11-20 16:20:26.832521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.840840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.840854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.849596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.849610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.858266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 
[2024-11-20 16:20:26.858280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.866952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.866965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.876035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.876048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.884931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.884945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.894007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.894020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.902650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.902664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.911062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.911075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.920053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.920067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.929184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.929198] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.938299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.938313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.947382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.947396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.956333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.956347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.965375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.965389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.974583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.974597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.983082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.983096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:26.992387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:26.992401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:27.001020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:27.001034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:41.142 [2024-11-20 16:20:27.009451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:27.009465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:27.018552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:27.018566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:27.027274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.142 [2024-11-20 16:20:27.027288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.142 [2024-11-20 16:20:27.036071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.143 [2024-11-20 16:20:27.036085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.143 [2024-11-20 16:20:27.044775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.143 [2024-11-20 16:20:27.044789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.143 [2024-11-20 16:20:27.053452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.143 [2024-11-20 16:20:27.053466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.143 [2024-11-20 16:20:27.061978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.143 [2024-11-20 16:20:27.061997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.143 [2024-11-20 16:20:27.071099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.143 [2024-11-20 16:20:27.071113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.143 [2024-11-20 16:20:27.080372] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.143 [2024-11-20 16:20:27.080385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.143 [2024-11-20 16:20:27.088795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.143 [2024-11-20 16:20:27.088809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.143 [2024-11-20 16:20:27.097614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.143 [2024-11-20 16:20:27.097628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.404 [2024-11-20 16:20:27.106800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.404 [2024-11-20 16:20:27.106814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.404 [2024-11-20 16:20:27.115215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.404 [2024-11-20 16:20:27.115229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.404 [2024-11-20 16:20:27.124087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.404 [2024-11-20 16:20:27.124100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.132653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.132667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.141104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.141117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.149032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.149046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.157715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.157729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.166668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.166682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.175255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.175269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.184001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.184015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.192684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.192698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.201371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.201385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 19118.00 IOPS, 149.36 MiB/s [2024-11-20T15:20:27.364Z] [2024-11-20 16:20:27.207600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.207614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 00:08:41.405 Latency(us) 00:08:41.405 [2024-11-20T15:20:27.364Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.405 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:41.405 Nvme1n1 : 5.01 19123.89 149.41 0.00 0.00 6687.46 2402.99 17367.04 00:08:41.405 [2024-11-20T15:20:27.364Z] =================================================================================================================== 00:08:41.405 [2024-11-20T15:20:27.364Z] Total : 19123.89 149.41 0.00 0.00 6687.46 2402.99 17367.04 00:08:41.405 [2024-11-20 16:20:27.215491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.215501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.223512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.223522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.231536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.231546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.239556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.239566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.247574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.247583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.255594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.255602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 
16:20:27.263615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.263623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.271635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.271643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.279654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.279662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.287675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.287682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.295695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.295702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.303718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.303727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.311737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.311749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 [2024-11-20 16:20:27.319758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.405 [2024-11-20 16:20:27.319765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.405 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2049096) - No such process 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2049096 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.405 delay0 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.405 16:20:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:41.665 [2024-11-20 16:20:27.418808] 
nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:49.805 Initializing NVMe Controllers 00:08:49.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:49.806 Initialization complete. Launching workers. 00:08:49.806 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 30006 00:08:49.806 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30109, failed to submit 134 00:08:49.806 success 30027, unsuccessful 82, failed 0 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.806 rmmod nvme_tcp 00:08:49.806 rmmod nvme_fabrics 00:08:49.806 rmmod nvme_keyring 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@517 -- # '[' -n 2046730 ']' 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2046730 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2046730 ']' 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2046730 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046730 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046730' 00:08:49.806 killing process with pid 2046730 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2046730 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2046730 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.806 
16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.806 16:20:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.191 00:08:51.191 real 0m33.703s 00:08:51.191 user 0m45.289s 00:08:51.191 sys 0m11.329s 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.191 ************************************ 00:08:51.191 END TEST nvmf_zcopy 00:08:51.191 ************************************ 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.191 ************************************ 00:08:51.191 START TEST nvmf_nmic 00:08:51.191 ************************************ 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:51.191 * Looking for test storage... 00:08:51.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:51.191 16:20:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 
1 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.191 --rc genhtml_branch_coverage=1 00:08:51.191 --rc genhtml_function_coverage=1 
00:08:51.191 --rc genhtml_legend=1 00:08:51.191 --rc geninfo_all_blocks=1 00:08:51.191 --rc geninfo_unexecuted_blocks=1 00:08:51.191 00:08:51.191 ' 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.191 --rc genhtml_branch_coverage=1 00:08:51.191 --rc genhtml_function_coverage=1 00:08:51.191 --rc genhtml_legend=1 00:08:51.191 --rc geninfo_all_blocks=1 00:08:51.191 --rc geninfo_unexecuted_blocks=1 00:08:51.191 00:08:51.191 ' 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.191 --rc genhtml_branch_coverage=1 00:08:51.191 --rc genhtml_function_coverage=1 00:08:51.191 --rc genhtml_legend=1 00:08:51.191 --rc geninfo_all_blocks=1 00:08:51.191 --rc geninfo_unexecuted_blocks=1 00:08:51.191 00:08:51.191 ' 00:08:51.191 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:51.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.191 --rc genhtml_branch_coverage=1 00:08:51.191 --rc genhtml_function_coverage=1 00:08:51.191 --rc genhtml_legend=1 00:08:51.191 --rc geninfo_all_blocks=1 00:08:51.192 --rc geninfo_unexecuted_blocks=1 00:08:51.192 00:08:51.192 ' 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.192 16:20:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.192 
16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.192 16:20:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.334 16:20:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:59.334 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:59.334 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:59.334 Found net devices under 0000:31:00.0: cvl_0_0 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:59.334 Found net devices under 0000:31:00.1: cvl_0_1 00:08:59.334 
16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.334 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:08:59.334 00:08:59.334 --- 10.0.0.2 ping statistics --- 00:08:59.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.334 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:08:59.335 00:08:59.335 --- 10.0.0.1 ping statistics --- 00:08:59.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.335 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2055821 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2055821 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2055821 ']' 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.335 16:20:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.335 [2024-11-20 16:20:44.480440] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:08:59.335 [2024-11-20 16:20:44.480488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.335 [2024-11-20 16:20:44.561182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.335 [2024-11-20 16:20:44.597700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.335 [2024-11-20 16:20:44.597733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:59.335 [2024-11-20 16:20:44.597742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.335 [2024-11-20 16:20:44.597749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.335 [2024-11-20 16:20:44.597755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.335 [2024-11-20 16:20:44.599509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.335 [2024-11-20 16:20:44.599622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.335 [2024-11-20 16:20:44.599776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.335 [2024-11-20 16:20:44.599777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.335 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.335 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:59.335 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.335 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.335 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 [2024-11-20 16:20:45.322930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.596 
16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 Malloc0 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 [2024-11-20 16:20:45.397462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:59.596 test case1: single bdev can't be used in multiple subsystems 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 [2024-11-20 16:20:45.433345] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:59.596 [2024-11-20 
16:20:45.433366] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:59.596 [2024-11-20 16:20:45.433374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.596 request: 00:08:59.596 { 00:08:59.596 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:59.596 "namespace": { 00:08:59.596 "bdev_name": "Malloc0", 00:08:59.596 "no_auto_visible": false 00:08:59.596 }, 00:08:59.596 "method": "nvmf_subsystem_add_ns", 00:08:59.596 "req_id": 1 00:08:59.596 } 00:08:59.596 Got JSON-RPC error response 00:08:59.596 response: 00:08:59.596 { 00:08:59.596 "code": -32602, 00:08:59.596 "message": "Invalid parameters" 00:08:59.596 } 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:59.596 Adding namespace failed - expected result. 
00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:59.596 test case2: host connect to nvmf target in multiple paths 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.596 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:59.596 [2024-11-20 16:20:45.445507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:59.597 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.597 16:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.510 16:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:02.895 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.895 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:02.895 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.895 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:02.895 16:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:04.807 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:04.807 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:04.807 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.807 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:04.807 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.807 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:04.807 16:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:04.807 [global] 00:09:04.807 thread=1 00:09:04.807 invalidate=1 00:09:04.807 rw=write 00:09:04.807 time_based=1 00:09:04.807 runtime=1 00:09:04.807 ioengine=libaio 00:09:04.807 direct=1 00:09:04.807 bs=4096 00:09:04.807 iodepth=1 00:09:04.807 norandommap=0 00:09:04.807 numjobs=1 00:09:04.807 00:09:04.807 verify_dump=1 00:09:04.807 verify_backlog=512 00:09:04.807 verify_state_save=0 00:09:04.807 do_verify=1 00:09:04.807 verify=crc32c-intel 00:09:04.807 [job0] 00:09:04.807 filename=/dev/nvme0n1 00:09:04.807 Could not set queue depth (nvme0n1) 00:09:05.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.067 fio-3.35 00:09:05.067 Starting 1 thread 00:09:06.454 00:09:06.454 job0: (groupid=0, jobs=1): err= 0: pid=2057364: Wed Nov 20 16:20:52 2024 00:09:06.454 read: IOPS=35, BW=144KiB/s (147kB/s)(148KiB/1028msec) 00:09:06.454 slat (nsec): min=6820, max=29683, avg=24040.62, stdev=7588.24 00:09:06.454 clat (usec): min=778, max=42040, avg=18591.17, stdev=20493.07 00:09:06.454 lat (usec): min=785, max=42068, 
avg=18615.21, stdev=20496.06 00:09:06.454 clat percentiles (usec): 00:09:06.454 | 1.00th=[ 783], 5.00th=[ 783], 10.00th=[ 791], 20.00th=[ 914], 00:09:06.454 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[41157], 00:09:06.454 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:06.454 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:06.454 | 99.99th=[42206] 00:09:06.454 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:09:06.454 slat (usec): min=9, max=28308, avg=86.02, stdev=1249.74 00:09:06.454 clat (usec): min=220, max=775, avg=568.54, stdev=88.40 00:09:06.454 lat (usec): min=233, max=28802, avg=654.56, stdev=1249.87 00:09:06.454 clat percentiles (usec): 00:09:06.454 | 1.00th=[ 363], 5.00th=[ 412], 10.00th=[ 445], 20.00th=[ 498], 00:09:06.454 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 586], 00:09:06.454 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 685], 95.00th=[ 701], 00:09:06.454 | 99.00th=[ 742], 99.50th=[ 766], 99.90th=[ 775], 99.95th=[ 775], 00:09:06.454 | 99.99th=[ 775] 00:09:06.454 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:06.454 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:06.454 lat (usec) : 250=0.18%, 500=18.94%, 750=73.22%, 1000=3.46% 00:09:06.454 lat (msec) : 2=1.28%, 50=2.91% 00:09:06.454 cpu : usr=2.04%, sys=0.97%, ctx=552, majf=0, minf=1 00:09:06.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.454 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:06.454 00:09:06.454 Run status group 0 (all jobs): 00:09:06.454 READ: bw=144KiB/s (147kB/s), 144KiB/s-144KiB/s (147kB/s-147kB/s), 
io=148KiB (152kB), run=1028-1028msec 00:09:06.454 WRITE: bw=1992KiB/s (2040kB/s), 1992KiB/s-1992KiB/s (2040kB/s-2040kB/s), io=2048KiB (2097kB), run=1028-1028msec 00:09:06.454 00:09:06.454 Disk stats (read/write): 00:09:06.454 nvme0n1: ios=58/512, merge=0/0, ticks=1488/248, in_queue=1736, util=98.80% 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:06.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:06.454 rmmod nvme_tcp 00:09:06.454 rmmod nvme_fabrics 00:09:06.454 rmmod nvme_keyring 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2055821 ']' 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2055821 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2055821 ']' 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2055821 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2055821 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2055821' 00:09:06.454 killing process with pid 2055821 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2055821 00:09:06.454 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2055821 00:09:06.715 16:20:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.716 16:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.265 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.265 00:09:09.265 real 0m17.746s 00:09:09.265 user 0m45.908s 00:09:09.265 sys 0m6.398s 00:09:09.265 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.265 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:09.265 ************************************ 00:09:09.265 END TEST nvmf_nmic 00:09:09.265 ************************************ 00:09:09.265 16:20:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.266 ************************************ 00:09:09.266 START TEST nvmf_fio_target 00:09:09.266 ************************************ 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:09.266 * Looking for test storage... 00:09:09.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:09.266 16:20:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.266 --rc genhtml_branch_coverage=1 00:09:09.266 --rc genhtml_function_coverage=1 00:09:09.266 --rc genhtml_legend=1 00:09:09.266 --rc geninfo_all_blocks=1 00:09:09.266 --rc geninfo_unexecuted_blocks=1 00:09:09.266 00:09:09.266 ' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.266 --rc genhtml_branch_coverage=1 00:09:09.266 --rc genhtml_function_coverage=1 00:09:09.266 --rc genhtml_legend=1 00:09:09.266 --rc geninfo_all_blocks=1 00:09:09.266 --rc geninfo_unexecuted_blocks=1 00:09:09.266 00:09:09.266 ' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.266 --rc genhtml_branch_coverage=1 00:09:09.266 --rc genhtml_function_coverage=1 00:09:09.266 --rc genhtml_legend=1 00:09:09.266 --rc geninfo_all_blocks=1 00:09:09.266 --rc geninfo_unexecuted_blocks=1 00:09:09.266 00:09:09.266 ' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:09.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.266 --rc genhtml_branch_coverage=1 00:09:09.266 --rc genhtml_function_coverage=1 00:09:09.266 --rc genhtml_legend=1 00:09:09.266 --rc geninfo_all_blocks=1 00:09:09.266 --rc geninfo_unexecuted_blocks=1 00:09:09.266 00:09:09.266 ' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.266 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.267 16:20:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.408 16:21:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:17.408 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:17.408 16:21:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:17.408 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:17.408 Found net devices under 0000:31:00.0: cvl_0_0 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:17.408 Found net devices under 0000:31:00.1: cvl_0_1 
00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.408 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:09:17.409 00:09:17.409 --- 10.0.0.2 ping statistics --- 00:09:17.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.409 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:09:17.409 00:09:17.409 --- 10.0.0.1 ping statistics --- 00:09:17.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.409 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
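The `nvmf_tcp_init` phase above isolates the target port in its own network namespace and verifies connectivity both ways before starting the target. A dry-run sketch of that bring-up (interface, namespace, and address names are taken from the log; with the `echo` prefix removed, the same commands would need root and the real E810 ports):

```shell
bring_up() {
    local DRY_RUN=echo                       # drop this to actually execute (requires root)
    local tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk
    $DRY_RUN ip -4 addr flush "$tgt"
    $DRY_RUN ip -4 addr flush "$ini"
    $DRY_RUN ip netns add "$ns"
    $DRY_RUN ip link set "$tgt" netns "$ns"                 # target port moves into the namespace
    $DRY_RUN ip addr add 10.0.0.1/24 dev "$ini"             # initiator stays in the root namespace
    $DRY_RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    $DRY_RUN ip link set "$ini" up
    $DRY_RUN ip netns exec "$ns" ip link set "$tgt" up
    $DRY_RUN ip netns exec "$ns" ip link set lo up
    $DRY_RUN iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    $DRY_RUN ping -c 1 10.0.0.2                             # root ns -> namespaced target
    $DRY_RUN ip netns exec "$ns" ping -c 1 10.0.0.1         # namespace -> initiator
}
out=$(bring_up)
```

Because both ports sit on the same host, the namespace is what forces NVMe/TCP traffic onto the wire between them instead of the loopback path; every later target command in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk` for the same reason.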
00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2061917 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2061917 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2061917 ']' 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.409 16:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.409 [2024-11-20 16:21:02.475022] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:09:17.409 [2024-11-20 16:21:02.475114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.409 [2024-11-20 16:21:02.562301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.409 [2024-11-20 16:21:02.604720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.409 [2024-11-20 16:21:02.604755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.409 [2024-11-20 16:21:02.604763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.409 [2024-11-20 16:21:02.604770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.409 [2024-11-20 16:21:02.604776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:17.409 [2024-11-20 16:21:02.606589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.409 [2024-11-20 16:21:02.606724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.409 [2024-11-20 16:21:02.606874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.409 [2024-11-20 16:21:02.606875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.409 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.409 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:17.409 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.409 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.409 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.409 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.409 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:17.670 [2024-11-20 16:21:03.471933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.670 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:17.930 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:17.930 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.192 16:21:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:18.192 16:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.192 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:18.192 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.453 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:18.453 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:18.713 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.713 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:18.713 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.973 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:18.973 16:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.235 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:19.235 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:19.496 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:19.496 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:19.496 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.756 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:19.756 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.018 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.018 [2024-11-20 16:21:05.910792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.018 16:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:20.278 16:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:20.539 16:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
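At this point fio.sh has built the whole target configuration over `rpc.py`. A condensed dry-run of that sequence, echoing the commands instead of executing them (since `rpc.py` needs a running `nvmf_tgt`); the arguments are copied from the log, and the loop glosses over the fact that the script interleaves `bdev_malloc_create` calls to obtain Malloc0 through Malloc6:

```shell
rpc="echo rpc.py"   # replace with the real scripts/rpc.py path to execute
nqn=nqn.2016-06.io.spdk:cnode1
out=$(
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512      # repeated calls yield Malloc0 .. Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns "$nqn" "$bdev"   # one namespace per backing bdev
    done
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
)
```

Four namespaces backed by plain malloc, RAID0, and concat bdevs is why the subsequent `nvme connect` produces `/dev/nvme0n1` through `/dev/nvme0n4`, and why `waitforserial SPDKISFASTANDAWESOME 4` expects exactly four matching block devices.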
00:09:21.922 16:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:21.922 16:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:21.922 16:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.922 16:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:21.922 16:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:21.922 16:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:24.468 16:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:24.468 16:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:24.468 16:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.468 16:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:24.468 16:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.468 16:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:24.468 16:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:24.468 [global] 00:09:24.468 thread=1 00:09:24.468 invalidate=1 00:09:24.468 rw=write 00:09:24.468 time_based=1 00:09:24.468 runtime=1 00:09:24.468 ioengine=libaio 00:09:24.468 direct=1 00:09:24.468 bs=4096 00:09:24.468 iodepth=1 00:09:24.468 norandommap=0 00:09:24.468 numjobs=1 00:09:24.468 00:09:24.468 
verify_dump=1 00:09:24.468 verify_backlog=512 00:09:24.468 verify_state_save=0 00:09:24.468 do_verify=1 00:09:24.468 verify=crc32c-intel 00:09:24.468 [job0] 00:09:24.468 filename=/dev/nvme0n1 00:09:24.468 [job1] 00:09:24.468 filename=/dev/nvme0n2 00:09:24.468 [job2] 00:09:24.468 filename=/dev/nvme0n3 00:09:24.468 [job3] 00:09:24.468 filename=/dev/nvme0n4 00:09:24.468 Could not set queue depth (nvme0n1) 00:09:24.468 Could not set queue depth (nvme0n2) 00:09:24.468 Could not set queue depth (nvme0n3) 00:09:24.468 Could not set queue depth (nvme0n4) 00:09:24.468 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.468 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.468 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.468 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.468 fio-3.35 00:09:24.468 Starting 4 threads 00:09:25.881 00:09:25.881 job0: (groupid=0, jobs=1): err= 0: pid=2064007: Wed Nov 20 16:21:11 2024 00:09:25.881 read: IOPS=15, BW=63.4KiB/s (64.9kB/s)(64.0KiB/1010msec) 00:09:25.881 slat (nsec): min=26302, max=42429, avg=29015.56, stdev=3630.52 00:09:25.881 clat (usec): min=40802, max=43679, avg=41802.68, stdev=656.65 00:09:25.881 lat (usec): min=40831, max=43708, avg=41831.70, stdev=656.27 00:09:25.881 clat percentiles (usec): 00:09:25.881 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:25.881 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:25.881 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43779], 00:09:25.881 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:09:25.881 | 99.99th=[43779] 00:09:25.881 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:25.881 slat (nsec): min=9550, 
max=56437, avg=32037.26, stdev=9965.02 00:09:25.881 clat (usec): min=211, max=940, avg=623.02, stdev=113.31 00:09:25.881 lat (usec): min=222, max=975, avg=655.05, stdev=117.74 00:09:25.881 clat percentiles (usec): 00:09:25.881 | 1.00th=[ 338], 5.00th=[ 408], 10.00th=[ 469], 20.00th=[ 529], 00:09:25.881 | 30.00th=[ 578], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:09:25.881 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:09:25.881 | 99.00th=[ 824], 99.50th=[ 865], 99.90th=[ 938], 99.95th=[ 938], 00:09:25.881 | 99.99th=[ 938] 00:09:25.881 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.881 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.881 lat (usec) : 250=0.19%, 500=13.83%, 750=70.83%, 1000=12.12% 00:09:25.881 lat (msec) : 50=3.03% 00:09:25.881 cpu : usr=1.29%, sys=1.78%, ctx=529, majf=0, minf=1 00:09:25.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.881 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.881 job1: (groupid=0, jobs=1): err= 0: pid=2064008: Wed Nov 20 16:21:11 2024 00:09:25.881 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:25.881 slat (nsec): min=7492, max=55668, avg=25430.88, stdev=5475.21 00:09:25.881 clat (usec): min=393, max=42071, avg=1366.48, stdev=4034.44 00:09:25.881 lat (usec): min=419, max=42101, avg=1391.91, stdev=4034.37 00:09:25.881 clat percentiles (usec): 00:09:25.881 | 1.00th=[ 494], 5.00th=[ 644], 10.00th=[ 742], 20.00th=[ 840], 00:09:25.881 | 30.00th=[ 906], 40.00th=[ 963], 50.00th=[ 1004], 60.00th=[ 1037], 00:09:25.881 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1156], 95.00th=[ 1188], 00:09:25.881 | 99.00th=[ 1385], 
99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:25.881 | 99.99th=[42206] 00:09:25.881 write: IOPS=677, BW=2709KiB/s (2774kB/s)(2712KiB/1001msec); 0 zone resets 00:09:25.881 slat (nsec): min=9694, max=59312, avg=17964.22, stdev=10758.20 00:09:25.881 clat (usec): min=110, max=997, avg=392.91, stdev=164.61 00:09:25.881 lat (usec): min=120, max=1032, avg=410.87, stdev=168.30 00:09:25.881 clat percentiles (usec): 00:09:25.881 | 1.00th=[ 119], 5.00th=[ 131], 10.00th=[ 163], 20.00th=[ 265], 00:09:25.881 | 30.00th=[ 285], 40.00th=[ 330], 50.00th=[ 375], 60.00th=[ 420], 00:09:25.881 | 70.00th=[ 474], 80.00th=[ 537], 90.00th=[ 627], 95.00th=[ 693], 00:09:25.881 | 99.00th=[ 791], 99.50th=[ 848], 99.90th=[ 996], 99.95th=[ 996], 00:09:25.881 | 99.99th=[ 996] 00:09:25.881 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.881 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.881 lat (usec) : 250=9.66%, 500=33.87%, 750=16.89%, 1000=17.56% 00:09:25.881 lat (msec) : 2=21.60%, 50=0.42% 00:09:25.881 cpu : usr=1.50%, sys=2.40%, ctx=1192, majf=0, minf=1 00:09:25.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.881 issued rwts: total=512,678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.881 job2: (groupid=0, jobs=1): err= 0: pid=2064019: Wed Nov 20 16:21:11 2024 00:09:25.881 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:25.881 slat (nsec): min=8627, max=58953, avg=27034.02, stdev=4127.36 00:09:25.881 clat (usec): min=776, max=1777, avg=1104.55, stdev=83.04 00:09:25.881 lat (usec): min=785, max=1816, avg=1131.58, stdev=83.84 00:09:25.881 clat percentiles (usec): 00:09:25.881 | 1.00th=[ 889], 5.00th=[ 971], 10.00th=[ 1012], 
20.00th=[ 1045], 00:09:25.881 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:09:25.882 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:09:25.882 | 99.00th=[ 1287], 99.50th=[ 1336], 99.90th=[ 1778], 99.95th=[ 1778], 00:09:25.882 | 99.99th=[ 1778] 00:09:25.882 write: IOPS=639, BW=2557KiB/s (2619kB/s)(2560KiB/1001msec); 0 zone resets 00:09:25.882 slat (nsec): min=9212, max=55680, avg=30081.16, stdev=9727.30 00:09:25.882 clat (usec): min=221, max=904, avg=612.95, stdev=122.78 00:09:25.882 lat (usec): min=246, max=938, avg=643.03, stdev=128.31 00:09:25.882 clat percentiles (usec): 00:09:25.882 | 1.00th=[ 306], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 502], 00:09:25.882 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:09:25.882 | 70.00th=[ 693], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 783], 00:09:25.882 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 906], 99.95th=[ 906], 00:09:25.882 | 99.99th=[ 906] 00:09:25.882 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.882 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.882 lat (usec) : 250=0.17%, 500=10.68%, 750=38.19%, 1000=9.98% 00:09:25.882 lat (msec) : 2=40.97% 00:09:25.882 cpu : usr=2.30%, sys=4.60%, ctx=1152, majf=0, minf=2 00:09:25.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.882 issued rwts: total=512,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.882 job3: (groupid=0, jobs=1): err= 0: pid=2064024: Wed Nov 20 16:21:11 2024 00:09:25.882 read: IOPS=293, BW=1175KiB/s (1203kB/s)(1176KiB/1001msec) 00:09:25.882 slat (nsec): min=7414, max=47151, avg=27720.92, stdev=3568.80 00:09:25.882 clat (usec): min=751, 
max=42655, avg=2220.96, stdev=6670.23 00:09:25.882 lat (usec): min=777, max=42682, avg=2248.68, stdev=6670.26 00:09:25.882 clat percentiles (usec): 00:09:25.882 | 1.00th=[ 807], 5.00th=[ 930], 10.00th=[ 1004], 20.00th=[ 1045], 00:09:25.882 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1139], 00:09:25.882 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1205], 95.00th=[ 1270], 00:09:25.882 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:09:25.882 | 99.99th=[42730] 00:09:25.882 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:25.882 slat (nsec): min=9377, max=56593, avg=32344.80, stdev=10306.70 00:09:25.882 clat (usec): min=266, max=887, avg=614.89, stdev=123.68 00:09:25.882 lat (usec): min=279, max=942, avg=647.24, stdev=128.58 00:09:25.882 clat percentiles (usec): 00:09:25.882 | 1.00th=[ 314], 5.00th=[ 383], 10.00th=[ 433], 20.00th=[ 506], 00:09:25.882 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 668], 00:09:25.882 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 799], 00:09:25.882 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 889], 99.95th=[ 889], 00:09:25.882 | 99.99th=[ 889] 00:09:25.882 bw ( KiB/s): min= 4096, max= 4096, per=44.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.882 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.882 lat (usec) : 500=12.66%, 750=43.05%, 1000=11.41% 00:09:25.882 lat (msec) : 2=31.89%, 50=0.99% 00:09:25.882 cpu : usr=1.50%, sys=3.30%, ctx=807, majf=0, minf=1 00:09:25.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.882 issued rwts: total=294,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.882 00:09:25.882 Run status group 0 (all jobs): 
00:09:25.882 READ: bw=5283KiB/s (5410kB/s), 63.4KiB/s-2046KiB/s (64.9kB/s-2095kB/s), io=5336KiB (5464kB), run=1001-1010msec 00:09:25.882 WRITE: bw=9275KiB/s (9498kB/s), 2028KiB/s-2709KiB/s (2076kB/s-2774kB/s), io=9368KiB (9593kB), run=1001-1010msec 00:09:25.882 00:09:25.882 Disk stats (read/write): 00:09:25.882 nvme0n1: ios=60/512, merge=0/0, ticks=947/255, in_queue=1202, util=83.97% 00:09:25.882 nvme0n2: ios=405/512, merge=0/0, ticks=1441/216, in_queue=1657, util=87.82% 00:09:25.882 nvme0n3: ios=496/512, merge=0/0, ticks=538/255, in_queue=793, util=95.03% 00:09:25.882 nvme0n4: ios=249/512, merge=0/0, ticks=952/251, in_queue=1203, util=94.01% 00:09:25.882 16:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:25.882 [global] 00:09:25.882 thread=1 00:09:25.882 invalidate=1 00:09:25.882 rw=randwrite 00:09:25.882 time_based=1 00:09:25.882 runtime=1 00:09:25.882 ioengine=libaio 00:09:25.882 direct=1 00:09:25.882 bs=4096 00:09:25.882 iodepth=1 00:09:25.882 norandommap=0 00:09:25.882 numjobs=1 00:09:25.882 00:09:25.882 verify_dump=1 00:09:25.882 verify_backlog=512 00:09:25.882 verify_state_save=0 00:09:25.882 do_verify=1 00:09:25.882 verify=crc32c-intel 00:09:25.882 [job0] 00:09:25.882 filename=/dev/nvme0n1 00:09:25.882 [job1] 00:09:25.882 filename=/dev/nvme0n2 00:09:25.882 [job2] 00:09:25.882 filename=/dev/nvme0n3 00:09:25.882 [job3] 00:09:25.882 filename=/dev/nvme0n4 00:09:25.882 Could not set queue depth (nvme0n1) 00:09:25.882 Could not set queue depth (nvme0n2) 00:09:25.882 Could not set queue depth (nvme0n3) 00:09:25.882 Could not set queue depth (nvme0n4) 00:09:26.145 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.145 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.145 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.145 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.145 fio-3.35 00:09:26.145 Starting 4 threads 00:09:27.544 00:09:27.544 job0: (groupid=0, jobs=1): err= 0: pid=2064752: Wed Nov 20 16:21:13 2024 00:09:27.544 read: IOPS=16, BW=65.6KiB/s (67.1kB/s)(68.0KiB/1037msec) 00:09:27.544 slat (nsec): min=25080, max=28544, avg=25664.12, stdev=785.74 00:09:27.544 clat (usec): min=41903, max=43030, avg=42169.28, stdev=403.81 00:09:27.544 lat (usec): min=41929, max=43056, avg=42194.94, stdev=404.07 00:09:27.544 clat percentiles (usec): 00:09:27.544 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:09:27.544 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:27.544 | 70.00th=[42206], 80.00th=[42206], 90.00th=[43254], 95.00th=[43254], 00:09:27.544 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:27.544 | 99.99th=[43254] 00:09:27.544 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:27.544 slat (nsec): min=9381, max=65005, avg=28227.02, stdev=9163.13 00:09:27.544 clat (usec): min=322, max=881, avg=588.14, stdev=102.43 00:09:27.544 lat (usec): min=346, max=912, avg=616.37, stdev=105.45 00:09:27.544 clat percentiles (usec): 00:09:27.544 | 1.00th=[ 343], 5.00th=[ 392], 10.00th=[ 469], 20.00th=[ 506], 00:09:27.544 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:09:27.544 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 750], 00:09:27.544 | 99.00th=[ 824], 99.50th=[ 857], 99.90th=[ 881], 99.95th=[ 881], 00:09:27.544 | 99.99th=[ 881] 00:09:27.544 bw ( KiB/s): min= 4096, max= 4096, per=46.91%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.544 lat (usec) : 500=18.90%, 750=72.78%, 1000=5.10% 00:09:27.544 lat 
(msec) : 50=3.21% 00:09:27.544 cpu : usr=0.48%, sys=1.54%, ctx=529, majf=0, minf=1 00:09:27.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.544 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.544 job1: (groupid=0, jobs=1): err= 0: pid=2064754: Wed Nov 20 16:21:13 2024 00:09:27.544 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:27.544 slat (nsec): min=8585, max=62669, avg=26649.25, stdev=3694.26 00:09:27.544 clat (usec): min=490, max=1237, avg=1039.63, stdev=95.73 00:09:27.544 lat (usec): min=516, max=1263, avg=1066.28, stdev=96.14 00:09:27.544 clat percentiles (usec): 00:09:27.544 | 1.00th=[ 742], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 971], 00:09:27.544 | 30.00th=[ 1004], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1074], 00:09:27.544 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1156], 00:09:27.544 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1237], 99.95th=[ 1237], 00:09:27.544 | 99.99th=[ 1237] 00:09:27.544 write: IOPS=731, BW=2925KiB/s (2995kB/s)(2928KiB/1001msec); 0 zone resets 00:09:27.544 slat (nsec): min=8851, max=64501, avg=29624.87, stdev=9199.82 00:09:27.544 clat (usec): min=245, max=877, avg=577.91, stdev=105.71 00:09:27.544 lat (usec): min=278, max=909, avg=607.53, stdev=108.72 00:09:27.544 clat percentiles (usec): 00:09:27.544 | 1.00th=[ 330], 5.00th=[ 388], 10.00th=[ 433], 20.00th=[ 498], 00:09:27.544 | 30.00th=[ 529], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:09:27.544 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 742], 00:09:27.544 | 99.00th=[ 799], 99.50th=[ 807], 99.90th=[ 881], 99.95th=[ 881], 00:09:27.544 | 99.99th=[ 881] 00:09:27.544 bw ( KiB/s): min= 4096, max= 4096, per=46.91%, 
avg=4096.00, stdev= 0.00, samples=1 00:09:27.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.544 lat (usec) : 250=0.08%, 500=12.70%, 750=44.21%, 1000=13.50% 00:09:27.544 lat (msec) : 2=29.50% 00:09:27.544 cpu : usr=2.40%, sys=4.90%, ctx=1244, majf=0, minf=1 00:09:27.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.544 issued rwts: total=512,732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.544 job2: (groupid=0, jobs=1): err= 0: pid=2064756: Wed Nov 20 16:21:13 2024 00:09:27.544 read: IOPS=68, BW=275KiB/s (281kB/s)(284KiB/1034msec) 00:09:27.544 slat (nsec): min=8081, max=29570, avg=26008.61, stdev=2291.40 00:09:27.544 clat (usec): min=464, max=42050, avg=11290.32, stdev=17817.80 00:09:27.544 lat (usec): min=493, max=42077, avg=11316.33, stdev=17818.43 00:09:27.544 clat percentiles (usec): 00:09:27.544 | 1.00th=[ 465], 5.00th=[ 816], 10.00th=[ 889], 20.00th=[ 963], 00:09:27.544 | 30.00th=[ 1004], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:09:27.544 | 70.00th=[ 1074], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:27.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:27.544 | 99.99th=[42206] 00:09:27.544 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:09:27.544 slat (nsec): min=9631, max=50325, avg=25215.23, stdev=10807.04 00:09:27.544 clat (usec): min=103, max=653, avg=416.95, stdev=85.11 00:09:27.544 lat (usec): min=113, max=663, avg=442.17, stdev=91.45 00:09:27.544 clat percentiles (usec): 00:09:27.544 | 1.00th=[ 128], 5.00th=[ 273], 10.00th=[ 297], 20.00th=[ 338], 00:09:27.544 | 30.00th=[ 371], 40.00th=[ 416], 50.00th=[ 441], 60.00th=[ 457], 00:09:27.544 | 70.00th=[ 469], 80.00th=[ 
486], 90.00th=[ 510], 95.00th=[ 529], 00:09:27.544 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 652], 99.95th=[ 652], 00:09:27.544 | 99.99th=[ 652] 00:09:27.544 bw ( KiB/s): min= 4096, max= 4096, per=46.91%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.544 lat (usec) : 250=2.23%, 500=74.27%, 750=11.66%, 1000=3.09% 00:09:27.544 lat (msec) : 2=5.66%, 50=3.09% 00:09:27.544 cpu : usr=0.97%, sys=1.16%, ctx=583, majf=0, minf=1 00:09:27.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.544 issued rwts: total=71,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.544 job3: (groupid=0, jobs=1): err= 0: pid=2064757: Wed Nov 20 16:21:13 2024 00:09:27.544 read: IOPS=18, BW=73.1KiB/s (74.9kB/s)(76.0KiB/1039msec) 00:09:27.544 slat (nsec): min=26266, max=27033, avg=26630.32, stdev=193.40 00:09:27.544 clat (usec): min=40727, max=42173, avg=41551.33, stdev=535.30 00:09:27.544 lat (usec): min=40754, max=42199, avg=41577.96, stdev=535.27 00:09:27.544 clat percentiles (usec): 00:09:27.544 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:27.544 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:09:27.544 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:27.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:27.544 | 99.99th=[42206] 00:09:27.544 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:09:27.544 slat (nsec): min=9631, max=50876, avg=29571.81, stdev=8410.71 00:09:27.544 clat (usec): min=158, max=827, avg=448.35, stdev=113.67 00:09:27.544 lat (usec): min=169, max=837, avg=477.92, stdev=115.54 00:09:27.544 
clat percentiles (usec): 00:09:27.544 | 1.00th=[ 208], 5.00th=[ 277], 10.00th=[ 293], 20.00th=[ 318], 00:09:27.545 | 30.00th=[ 351], 40.00th=[ 424], 50.00th=[ 494], 60.00th=[ 519], 00:09:27.545 | 70.00th=[ 529], 80.00th=[ 545], 90.00th=[ 570], 95.00th=[ 594], 00:09:27.545 | 99.00th=[ 627], 99.50th=[ 668], 99.90th=[ 824], 99.95th=[ 824], 00:09:27.545 | 99.99th=[ 824] 00:09:27.545 bw ( KiB/s): min= 4096, max= 4096, per=46.91%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.545 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.545 lat (usec) : 250=2.26%, 500=47.83%, 750=46.14%, 1000=0.19% 00:09:27.545 lat (msec) : 50=3.58% 00:09:27.545 cpu : usr=0.67%, sys=1.54%, ctx=531, majf=0, minf=1 00:09:27.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.545 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.545 00:09:27.545 Run status group 0 (all jobs): 00:09:27.545 READ: bw=2383KiB/s (2440kB/s), 65.6KiB/s-2046KiB/s (67.1kB/s-2095kB/s), io=2476KiB (2535kB), run=1001-1039msec 00:09:27.545 WRITE: bw=8731KiB/s (8941kB/s), 1971KiB/s-2925KiB/s (2018kB/s-2995kB/s), io=9072KiB (9290kB), run=1001-1039msec 00:09:27.545 00:09:27.545 Disk stats (read/write): 00:09:27.545 nvme0n1: ios=62/512, merge=0/0, ticks=574/286, in_queue=860, util=88.28% 00:09:27.545 nvme0n2: ios=519/512, merge=0/0, ticks=716/238, in_queue=954, util=91.96% 00:09:27.545 nvme0n3: ios=87/512, merge=0/0, ticks=778/209, in_queue=987, util=92.24% 00:09:27.545 nvme0n4: ios=63/512, merge=0/0, ticks=644/217, in_queue=861, util=92.58% 00:09:27.545 16:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
128 -t write -r 1 -v 00:09:27.545 [global] 00:09:27.545 thread=1 00:09:27.545 invalidate=1 00:09:27.545 rw=write 00:09:27.545 time_based=1 00:09:27.545 runtime=1 00:09:27.545 ioengine=libaio 00:09:27.545 direct=1 00:09:27.545 bs=4096 00:09:27.545 iodepth=128 00:09:27.545 norandommap=0 00:09:27.545 numjobs=1 00:09:27.545 00:09:27.545 verify_dump=1 00:09:27.545 verify_backlog=512 00:09:27.545 verify_state_save=0 00:09:27.545 do_verify=1 00:09:27.545 verify=crc32c-intel 00:09:27.545 [job0] 00:09:27.545 filename=/dev/nvme0n1 00:09:27.545 [job1] 00:09:27.545 filename=/dev/nvme0n2 00:09:27.545 [job2] 00:09:27.545 filename=/dev/nvme0n3 00:09:27.545 [job3] 00:09:27.545 filename=/dev/nvme0n4 00:09:27.545 Could not set queue depth (nvme0n1) 00:09:27.545 Could not set queue depth (nvme0n2) 00:09:27.545 Could not set queue depth (nvme0n3) 00:09:27.545 Could not set queue depth (nvme0n4) 00:09:27.805 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.805 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.805 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.805 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.805 fio-3.35 00:09:27.805 Starting 4 threads 00:09:29.178 00:09:29.178 job0: (groupid=0, jobs=1): err= 0: pid=2065279: Wed Nov 20 16:21:14 2024 00:09:29.178 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:09:29.178 slat (nsec): min=891, max=19008k, avg=97846.51, stdev=836782.78 00:09:29.178 clat (usec): min=3626, max=39800, avg=12942.48, stdev=5467.69 00:09:29.178 lat (usec): min=3643, max=55012, avg=13040.32, stdev=5549.67 00:09:29.178 clat percentiles (usec): 00:09:29.178 | 1.00th=[ 5932], 5.00th=[ 7635], 10.00th=[ 8979], 20.00th=[ 9503], 00:09:29.178 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10421], 
60.00th=[12125], 00:09:29.178 | 70.00th=[12911], 80.00th=[15926], 90.00th=[22676], 95.00th=[24511], 00:09:29.178 | 99.00th=[30016], 99.50th=[33817], 99.90th=[35914], 99.95th=[38536], 00:09:29.178 | 99.99th=[39584] 00:09:29.178 write: IOPS=4337, BW=16.9MiB/s (17.8MB/s)(17.1MiB/1007msec); 0 zone resets 00:09:29.178 slat (nsec): min=1584, max=15883k, avg=116920.38, stdev=759655.72 00:09:29.178 clat (usec): min=585, max=103567, avg=17111.18, stdev=15924.27 00:09:29.178 lat (usec): min=618, max=103577, avg=17228.10, stdev=16040.06 00:09:29.178 clat percentiles (msec): 00:09:29.178 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 8], 00:09:29.178 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 15], 60.00th=[ 17], 00:09:29.178 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 28], 95.00th=[ 46], 00:09:29.178 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 97], 99.95th=[ 104], 00:09:29.178 | 99.99th=[ 104] 00:09:29.178 bw ( KiB/s): min=13488, max=20440, per=21.10%, avg=16964.00, stdev=4915.81, samples=2 00:09:29.178 iops : min= 3372, max= 5110, avg=4241.00, stdev=1228.95, samples=2 00:09:29.178 lat (usec) : 750=0.02% 00:09:29.178 lat (msec) : 2=0.13%, 4=2.19%, 10=37.57%, 20=40.90%, 50=16.95% 00:09:29.178 lat (msec) : 100=2.19%, 250=0.05% 00:09:29.178 cpu : usr=3.98%, sys=3.78%, ctx=379, majf=0, minf=1 00:09:29.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.178 issued rwts: total=4096,4368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.178 job1: (groupid=0, jobs=1): err= 0: pid=2065280: Wed Nov 20 16:21:14 2024 00:09:29.178 read: IOPS=5437, BW=21.2MiB/s (22.3MB/s)(21.3MiB/1002msec) 00:09:29.178 slat (nsec): min=881, max=13231k, avg=83054.69, stdev=559167.43 00:09:29.178 clat (usec): min=1659, max=49395, avg=10524.05, 
stdev=6325.29 00:09:29.178 lat (usec): min=1662, max=49427, avg=10607.10, stdev=6382.63 00:09:29.178 clat percentiles (usec): 00:09:29.178 | 1.00th=[ 5538], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 7570], 00:09:29.178 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:09:29.178 | 70.00th=[ 9634], 80.00th=[11338], 90.00th=[15008], 95.00th=[23200], 00:09:29.178 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[48497], 00:09:29.178 | 99.99th=[49546] 00:09:29.178 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:29.178 slat (nsec): min=1527, max=14408k, avg=92975.58, stdev=505464.12 00:09:29.178 clat (usec): min=4067, max=35895, avg=12298.17, stdev=7057.90 00:09:29.178 lat (usec): min=4069, max=35946, avg=12391.15, stdev=7112.33 00:09:29.178 clat percentiles (usec): 00:09:29.178 | 1.00th=[ 4948], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7177], 00:09:29.178 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8160], 60.00th=[11469], 00:09:29.178 | 70.00th=[13173], 80.00th=[18482], 90.00th=[24773], 95.00th=[27395], 00:09:29.178 | 99.00th=[30278], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:09:29.178 | 99.99th=[35914] 00:09:29.178 bw ( KiB/s): min=16504, max=28552, per=28.02%, avg=22528.00, stdev=8519.22, samples=2 00:09:29.178 iops : min= 4126, max= 7138, avg=5632.00, stdev=2129.81, samples=2 00:09:29.178 lat (msec) : 2=0.05%, 10=65.66%, 20=22.59%, 50=11.71% 00:09:29.178 cpu : usr=3.50%, sys=4.70%, ctx=661, majf=0, minf=1 00:09:29.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.178 issued rwts: total=5448,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.178 job2: (groupid=0, jobs=1): err= 0: pid=2065281: Wed Nov 20 16:21:14 
2024 00:09:29.178 read: IOPS=5528, BW=21.6MiB/s (22.6MB/s)(21.7MiB/1005msec) 00:09:29.178 slat (nsec): min=929, max=11866k, avg=88796.81, stdev=577750.30 00:09:29.178 clat (usec): min=1047, max=45196, avg=11000.36, stdev=5447.60 00:09:29.178 lat (usec): min=3323, max=45223, avg=11089.16, stdev=5508.77 00:09:29.178 clat percentiles (usec): 00:09:29.178 | 1.00th=[ 5604], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7963], 00:09:29.178 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[10028], 00:09:29.178 | 70.00th=[10683], 80.00th=[11600], 90.00th=[17957], 95.00th=[24249], 00:09:29.178 | 99.00th=[31851], 99.50th=[33424], 99.90th=[41681], 99.95th=[42206], 00:09:29.178 | 99.99th=[45351] 00:09:29.178 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:09:29.178 slat (nsec): min=1574, max=12703k, avg=83523.21, stdev=483634.63 00:09:29.178 clat (usec): min=1157, max=48016, avg=11784.43, stdev=6576.73 00:09:29.178 lat (usec): min=2011, max=48041, avg=11867.96, stdev=6621.64 00:09:29.178 clat percentiles (usec): 00:09:29.178 | 1.00th=[ 4752], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7504], 00:09:29.178 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 9503], 60.00th=[11338], 00:09:29.178 | 70.00th=[12256], 80.00th=[15139], 90.00th=[19530], 95.00th=[25822], 00:09:29.178 | 99.00th=[37487], 99.50th=[41157], 99.90th=[42730], 99.95th=[45876], 00:09:29.178 | 99.99th=[47973] 00:09:29.178 bw ( KiB/s): min=19144, max=25912, per=28.02%, avg=22528.00, stdev=4785.70, samples=2 00:09:29.178 iops : min= 4786, max= 6478, avg=5632.00, stdev=1196.42, samples=2 00:09:29.178 lat (msec) : 2=0.02%, 4=0.48%, 10=57.03%, 20=33.28%, 50=9.20% 00:09:29.178 cpu : usr=3.59%, sys=5.58%, ctx=511, majf=0, minf=1 00:09:29.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:29.178 issued rwts: total=5556,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.178 job3: (groupid=0, jobs=1): err= 0: pid=2065282: Wed Nov 20 16:21:14 2024 00:09:29.178 read: IOPS=4107, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1005msec) 00:09:29.178 slat (nsec): min=1067, max=19456k, avg=117655.38, stdev=864340.98 00:09:29.178 clat (usec): min=2689, max=50513, avg=14422.42, stdev=6262.63 00:09:29.178 lat (usec): min=2709, max=50539, avg=14540.08, stdev=6339.07 00:09:29.178 clat percentiles (usec): 00:09:29.178 | 1.00th=[ 7373], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10159], 00:09:29.178 | 30.00th=[11076], 40.00th=[11469], 50.00th=[12125], 60.00th=[13173], 00:09:29.178 | 70.00th=[14484], 80.00th=[17171], 90.00th=[23200], 95.00th=[31065], 00:09:29.178 | 99.00th=[34866], 99.50th=[34866], 99.90th=[39060], 99.95th=[45876], 00:09:29.178 | 99.99th=[50594] 00:09:29.178 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:09:29.178 slat (nsec): min=1759, max=13048k, avg=106245.51, stdev=700097.04 00:09:29.178 clat (usec): min=1300, max=79219, avg=14740.38, stdev=11350.57 00:09:29.178 lat (usec): min=1311, max=79226, avg=14846.63, stdev=11432.61 00:09:29.178 clat percentiles (usec): 00:09:29.178 | 1.00th=[ 4883], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9372], 00:09:29.178 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11207], 60.00th=[12125], 00:09:29.178 | 70.00th=[15270], 80.00th=[16909], 90.00th=[19530], 95.00th=[35390], 00:09:29.178 | 99.00th=[72877], 99.50th=[74974], 99.90th=[79168], 99.95th=[79168], 00:09:29.178 | 99.99th=[79168] 00:09:29.178 bw ( KiB/s): min=16384, max=19712, per=22.45%, avg=18048.00, stdev=2353.25, samples=2 00:09:29.178 iops : min= 4096, max= 4928, avg=4512.00, stdev=588.31, samples=2 00:09:29.178 lat (msec) : 2=0.02%, 4=0.11%, 10=26.19%, 20=62.82%, 50=9.11% 00:09:29.178 lat (msec) : 100=1.74% 00:09:29.178 cpu : usr=3.19%, sys=5.28%, ctx=314, majf=0, 
minf=1 00:09:29.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:29.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.178 issued rwts: total=4128,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.178 00:09:29.178 Run status group 0 (all jobs): 00:09:29.178 READ: bw=74.6MiB/s (78.2MB/s), 15.9MiB/s-21.6MiB/s (16.7MB/s-22.6MB/s), io=75.1MiB (78.8MB), run=1002-1007msec 00:09:29.178 WRITE: bw=78.5MiB/s (82.3MB/s), 16.9MiB/s-22.0MiB/s (17.8MB/s-23.0MB/s), io=79.1MiB (82.9MB), run=1002-1007msec 00:09:29.178 00:09:29.178 Disk stats (read/write): 00:09:29.179 nvme0n1: ios=2610/3072, merge=0/0, ticks=34110/62080, in_queue=96190, util=95.79% 00:09:29.179 nvme0n2: ios=3623/3935, merge=0/0, ticks=21117/27317, in_queue=48434, util=96.07% 00:09:29.179 nvme0n3: ios=4608/5039, merge=0/0, ticks=29813/37058, in_queue=66871, util=86.59% 00:09:29.179 nvme0n4: ios=3324/3584, merge=0/0, ticks=29985/35637, in_queue=65622, util=88.44% 00:09:29.179 16:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:29.179 [global] 00:09:29.179 thread=1 00:09:29.179 invalidate=1 00:09:29.179 rw=randwrite 00:09:29.179 time_based=1 00:09:29.179 runtime=1 00:09:29.179 ioengine=libaio 00:09:29.179 direct=1 00:09:29.179 bs=4096 00:09:29.179 iodepth=128 00:09:29.179 norandommap=0 00:09:29.179 numjobs=1 00:09:29.179 00:09:29.179 verify_dump=1 00:09:29.179 verify_backlog=512 00:09:29.179 verify_state_save=0 00:09:29.179 do_verify=1 00:09:29.179 verify=crc32c-intel 00:09:29.179 [job0] 00:09:29.179 filename=/dev/nvme0n1 00:09:29.179 [job1] 00:09:29.179 filename=/dev/nvme0n2 00:09:29.179 [job2] 00:09:29.179 filename=/dev/nvme0n3 00:09:29.179 [job3] 
00:09:29.179 filename=/dev/nvme0n4 00:09:29.179 Could not set queue depth (nvme0n1) 00:09:29.179 Could not set queue depth (nvme0n2) 00:09:29.179 Could not set queue depth (nvme0n3) 00:09:29.179 Could not set queue depth (nvme0n4) 00:09:29.439 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.439 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.439 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.439 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.439 fio-3.35 00:09:29.439 Starting 4 threads 00:09:30.816 00:09:30.816 job0: (groupid=0, jobs=1): err= 0: pid=2065809: Wed Nov 20 16:21:16 2024 00:09:30.816 read: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(40.0MiB/1005msec) 00:09:30.816 slat (nsec): min=942, max=8003.9k, avg=49157.46, stdev=376852.53 00:09:30.816 clat (usec): min=2237, max=15024, avg=6720.76, stdev=1669.35 00:09:30.816 lat (usec): min=2265, max=15032, avg=6769.92, stdev=1688.43 00:09:30.816 clat percentiles (usec): 00:09:30.816 | 1.00th=[ 2999], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 5473], 00:09:30.816 | 30.00th=[ 5866], 40.00th=[ 6194], 50.00th=[ 6521], 60.00th=[ 6849], 00:09:30.816 | 70.00th=[ 7242], 80.00th=[ 7898], 90.00th=[ 8848], 95.00th=[ 9765], 00:09:30.816 | 99.00th=[11863], 99.50th=[12387], 99.90th=[15008], 99.95th=[15008], 00:09:30.816 | 99.99th=[15008] 00:09:30.816 write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(40.0MiB/1005msec); 0 zone resets 00:09:30.816 slat (nsec): min=1546, max=5755.7k, avg=43798.73, stdev=316925.28 00:09:30.816 clat (usec): min=837, max=23175, avg=5729.92, stdev=2375.77 00:09:30.816 lat (usec): min=845, max=23178, avg=5773.72, stdev=2389.12 00:09:30.816 clat percentiles (usec): 00:09:30.816 | 1.00th=[ 2311], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 
4080], 00:09:30.816 | 30.00th=[ 4686], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 5800], 00:09:30.816 | 70.00th=[ 6521], 80.00th=[ 7046], 90.00th=[ 7504], 95.00th=[ 8455], 00:09:30.816 | 99.00th=[19792], 99.50th=[22676], 99.90th=[23200], 99.95th=[23200], 00:09:30.816 | 99.99th=[23200] 00:09:30.816 bw ( KiB/s): min=38568, max=43352, per=43.27%, avg=40960.00, stdev=3382.80, samples=2 00:09:30.816 iops : min= 9642, max=10838, avg=10240.00, stdev=845.70, samples=2 00:09:30.816 lat (usec) : 1000=0.02% 00:09:30.816 lat (msec) : 2=0.35%, 4=10.78%, 10=85.85%, 20=2.51%, 50=0.50% 00:09:30.816 cpu : usr=6.37%, sys=8.07%, ctx=640, majf=0, minf=1 00:09:30.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:30.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.816 issued rwts: total=10232,10240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.816 job1: (groupid=0, jobs=1): err= 0: pid=2065810: Wed Nov 20 16:21:16 2024 00:09:30.816 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:09:30.816 slat (nsec): min=886, max=24464k, avg=59670.00, stdev=606717.93 00:09:30.816 clat (usec): min=816, max=37866, avg=8807.79, stdev=5766.90 00:09:30.816 lat (usec): min=821, max=38747, avg=8867.46, stdev=5814.45 00:09:30.816 clat percentiles (usec): 00:09:30.816 | 1.00th=[ 2024], 5.00th=[ 2540], 10.00th=[ 3064], 20.00th=[ 5407], 00:09:30.816 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7504], 60.00th=[ 7767], 00:09:30.816 | 70.00th=[ 8029], 80.00th=[10159], 90.00th=[17695], 95.00th=[19268], 00:09:30.816 | 99.00th=[34866], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:09:30.816 | 99.99th=[38011] 00:09:30.816 write: IOPS=5759, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1004msec); 0 zone resets 00:09:30.816 slat (nsec): min=1485, max=24400k, avg=95396.41, stdev=763230.83 
00:09:30.816 clat (usec): min=529, max=121512, avg=14232.59, stdev=23612.85 00:09:30.816 lat (usec): min=560, max=121525, avg=14327.98, stdev=23762.62 00:09:30.816 clat percentiles (usec): 00:09:30.816 | 1.00th=[ 1123], 5.00th=[ 1811], 10.00th=[ 2999], 20.00th=[ 3916], 00:09:30.816 | 30.00th=[ 5342], 40.00th=[ 6456], 50.00th=[ 7111], 60.00th=[ 7373], 00:09:30.816 | 70.00th=[ 7832], 80.00th=[ 11338], 90.00th=[ 26084], 95.00th=[ 86508], 00:09:30.816 | 99.00th=[111674], 99.50th=[114820], 99.90th=[120062], 99.95th=[121111], 00:09:30.816 | 99.99th=[121111] 00:09:30.816 bw ( KiB/s): min=13296, max=31952, per=23.90%, avg=22624.00, stdev=13191.78, samples=2 00:09:30.816 iops : min= 3324, max= 7988, avg=5656.00, stdev=3297.95, samples=2 00:09:30.816 lat (usec) : 750=0.10%, 1000=0.34% 00:09:30.816 lat (msec) : 2=3.09%, 4=14.24%, 10=61.17%, 20=12.16%, 50=4.73% 00:09:30.816 lat (msec) : 100=2.70%, 250=1.47% 00:09:30.816 cpu : usr=3.69%, sys=5.28%, ctx=517, majf=0, minf=1 00:09:30.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:30.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.816 issued rwts: total=5120,5783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.816 job2: (groupid=0, jobs=1): err= 0: pid=2065814: Wed Nov 20 16:21:16 2024 00:09:30.816 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:09:30.816 slat (nsec): min=992, max=17865k, avg=135989.29, stdev=1002174.73 00:09:30.816 clat (msec): min=6, max=107, avg=15.75, stdev=12.90 00:09:30.816 lat (msec): min=6, max=107, avg=15.88, stdev=13.02 00:09:30.816 clat percentiles (msec): 00:09:30.816 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:09:30.816 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 16], 00:09:30.816 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 22], 95.00th=[ 
30], 00:09:30.816 | 99.00th=[ 92], 99.50th=[ 102], 99.90th=[ 108], 99.95th=[ 108], 00:09:30.817 | 99.99th=[ 108] 00:09:30.817 write: IOPS=3704, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1008msec); 0 zone resets 00:09:30.817 slat (nsec): min=1607, max=13607k, avg=131850.26, stdev=815916.88 00:09:30.817 clat (usec): min=1140, max=107501, avg=19143.16, stdev=14460.41 00:09:30.817 lat (usec): min=1150, max=107503, avg=19275.01, stdev=14523.91 00:09:30.817 clat percentiles (msec): 00:09:30.817 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:09:30.817 | 30.00th=[ 11], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:09:30.817 | 70.00th=[ 21], 80.00th=[ 26], 90.00th=[ 41], 95.00th=[ 52], 00:09:30.817 | 99.00th=[ 66], 99.50th=[ 86], 99.90th=[ 91], 99.95th=[ 108], 00:09:30.817 | 99.99th=[ 108] 00:09:30.817 bw ( KiB/s): min=12464, max=16384, per=15.24%, avg=14424.00, stdev=2771.86, samples=2 00:09:30.817 iops : min= 3116, max= 4096, avg=3606.00, stdev=692.96, samples=2 00:09:30.817 lat (msec) : 2=0.03%, 4=0.27%, 10=29.73%, 20=47.06%, 50=18.69% 00:09:30.817 lat (msec) : 100=3.79%, 250=0.42% 00:09:30.817 cpu : usr=2.28%, sys=4.67%, ctx=285, majf=0, minf=1 00:09:30.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:30.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.817 issued rwts: total=3584,3734,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.817 job3: (groupid=0, jobs=1): err= 0: pid=2065816: Wed Nov 20 16:21:16 2024 00:09:30.817 read: IOPS=3691, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec) 00:09:30.817 slat (nsec): min=988, max=24335k, avg=135921.25, stdev=994606.88 00:09:30.817 clat (usec): min=3027, max=81458, avg=16156.53, stdev=10944.91 00:09:30.817 lat (usec): min=4633, max=81466, avg=16292.45, stdev=11032.47 00:09:30.817 clat percentiles (usec): 
00:09:30.817 | 1.00th=[ 7046], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9896], 00:09:30.817 | 30.00th=[10290], 40.00th=[11994], 50.00th=[13435], 60.00th=[13960], 00:09:30.817 | 70.00th=[15664], 80.00th=[18744], 90.00th=[25560], 95.00th=[33817], 00:09:30.817 | 99.00th=[77071], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:09:30.817 | 99.99th=[81265] 00:09:30.817 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:30.817 slat (nsec): min=1597, max=11535k, avg=115638.82, stdev=715171.67 00:09:30.817 clat (usec): min=2520, max=81432, avg=16477.91, stdev=11990.09 00:09:30.817 lat (usec): min=2529, max=81436, avg=16593.54, stdev=12056.02 00:09:30.817 clat percentiles (usec): 00:09:30.817 | 1.00th=[ 4424], 5.00th=[ 5800], 10.00th=[ 6980], 20.00th=[ 7767], 00:09:30.817 | 30.00th=[ 9896], 40.00th=[10945], 50.00th=[12518], 60.00th=[13304], 00:09:30.817 | 70.00th=[16319], 80.00th=[21890], 90.00th=[33162], 95.00th=[43254], 00:09:30.817 | 99.00th=[58983], 99.50th=[61604], 99.90th=[63177], 99.95th=[63177], 00:09:30.817 | 99.99th=[81265] 00:09:30.817 bw ( KiB/s): min=16376, max=16384, per=17.30%, avg=16380.00, stdev= 5.66, samples=2 00:09:30.817 iops : min= 4094, max= 4096, avg=4095.00, stdev= 1.41, samples=2 00:09:30.817 lat (msec) : 4=0.45%, 10=27.22%, 20=50.63%, 50=18.43%, 100=3.27% 00:09:30.817 cpu : usr=3.39%, sys=4.18%, ctx=299, majf=0, minf=2 00:09:30.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:30.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.817 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.817 00:09:30.817 Run status group 0 (all jobs): 00:09:30.817 READ: bw=87.8MiB/s (92.0MB/s), 13.9MiB/s-39.8MiB/s (14.6MB/s-41.7MB/s), io=88.5MiB (92.8MB), run=1004-1008msec 00:09:30.817 
WRITE: bw=92.4MiB/s (96.9MB/s), 14.5MiB/s-39.8MiB/s (15.2MB/s-41.7MB/s), io=93.2MiB (97.7MB), run=1004-1008msec 00:09:30.817 00:09:30.817 Disk stats (read/write): 00:09:30.817 nvme0n1: ios=8754/8911, merge=0/0, ticks=54473/47304, in_queue=101777, util=88.38% 00:09:30.817 nvme0n2: ios=3625/4554, merge=0/0, ticks=29310/67559, in_queue=96869, util=92.58% 00:09:30.817 nvme0n3: ios=3113/3072, merge=0/0, ticks=49058/54251, in_queue=103309, util=92.45% 00:09:30.817 nvme0n4: ios=3093/3252, merge=0/0, ticks=49292/54161, in_queue=103453, util=90.03% 00:09:30.817 16:21:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:30.817 16:21:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2066091 00:09:30.817 16:21:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:30.817 16:21:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:30.817 [global] 00:09:30.817 thread=1 00:09:30.817 invalidate=1 00:09:30.817 rw=read 00:09:30.817 time_based=1 00:09:30.817 runtime=10 00:09:30.817 ioengine=libaio 00:09:30.817 direct=1 00:09:30.817 bs=4096 00:09:30.817 iodepth=1 00:09:30.817 norandommap=1 00:09:30.817 numjobs=1 00:09:30.817 00:09:30.817 [job0] 00:09:30.817 filename=/dev/nvme0n1 00:09:30.817 [job1] 00:09:30.817 filename=/dev/nvme0n2 00:09:30.817 [job2] 00:09:30.817 filename=/dev/nvme0n3 00:09:30.817 [job3] 00:09:30.817 filename=/dev/nvme0n4 00:09:30.817 Could not set queue depth (nvme0n1) 00:09:30.817 Could not set queue depth (nvme0n2) 00:09:30.817 Could not set queue depth (nvme0n3) 00:09:30.817 Could not set queue depth (nvme0n4) 00:09:31.076 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.076 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.076 
job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.076 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.076 fio-3.35 00:09:31.076 Starting 4 threads 00:09:33.607 16:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:33.865 16:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:33.865 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=5885952, buflen=4096 00:09:33.865 fio: pid=2066341, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:34.123 16:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.123 16:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:34.123 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2662400, buflen=4096 00:09:34.123 fio: pid=2066340, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:34.123 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.123 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:34.123 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8949760, buflen=4096 00:09:34.123 fio: pid=2066338, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:34.381 16:21:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.381 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:34.381 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=8265728, buflen=4096 00:09:34.381 fio: pid=2066339, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:34.639 00:09:34.639 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2066338: Wed Nov 20 16:21:20 2024 00:09:34.639 read: IOPS=750, BW=3000KiB/s (3072kB/s)(8740KiB/2913msec) 00:09:34.639 slat (usec): min=7, max=15036, avg=38.71, stdev=452.04 00:09:34.639 clat (usec): min=401, max=42070, avg=1278.05, stdev=3485.19 00:09:34.639 lat (usec): min=417, max=42096, avg=1316.77, stdev=3513.05 00:09:34.639 clat percentiles (usec): 00:09:34.639 | 1.00th=[ 676], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 914], 00:09:34.639 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1012], 00:09:34.639 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:09:34.639 | 99.00th=[ 1237], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:34.639 | 99.99th=[42206] 00:09:34.639 bw ( KiB/s): min= 391, max= 3960, per=36.30%, avg=2892.60, stdev=1556.38, samples=5 00:09:34.639 iops : min= 97, max= 990, avg=723.00, stdev=389.40, samples=5 00:09:34.639 lat (usec) : 500=0.09%, 750=2.33%, 1000=51.60% 00:09:34.639 lat (msec) : 2=45.11%, 4=0.05%, 10=0.05%, 50=0.73% 00:09:34.639 cpu : usr=0.82%, sys=2.20%, ctx=2189, majf=0, minf=1 00:09:34.639 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.639 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:34.639 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.639 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.639 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2066339: Wed Nov 20 16:21:20 2024 00:09:34.639 read: IOPS=639, BW=2556KiB/s (2617kB/s)(8072KiB/3158msec) 00:09:34.639 slat (usec): min=6, max=20482, avg=65.26, stdev=767.72 00:09:34.639 clat (usec): min=452, max=42237, avg=1481.93, stdev=4463.69 00:09:34.639 lat (usec): min=497, max=42263, avg=1547.21, stdev=4526.04 00:09:34.639 clat percentiles (usec): 00:09:34.640 | 1.00th=[ 685], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 922], 00:09:34.640 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:09:34.640 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:09:34.640 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:09:34.640 | 99.99th=[42206] 00:09:34.640 bw ( KiB/s): min= 846, max= 4087, per=33.29%, avg=2652.83, stdev=1564.43, samples=6 00:09:34.640 iops : min= 211, max= 1021, avg=663.00, stdev=391.09, samples=6 00:09:34.640 lat (usec) : 500=0.10%, 750=2.38%, 1000=51.41% 00:09:34.640 lat (msec) : 2=44.82%, 50=1.24% 00:09:34.640 cpu : usr=1.49%, sys=2.22%, ctx=2025, majf=0, minf=2 00:09:34.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.640 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.640 issued rwts: total=2019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.640 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2066340: Wed Nov 20 16:21:20 2024 00:09:34.640 read: IOPS=236, BW=943KiB/s (966kB/s)(2600KiB/2756msec) 00:09:34.640 slat (usec): min=24, max=13561, avg=47.51, 
stdev=530.57 00:09:34.640 clat (usec): min=773, max=42829, avg=4151.09, stdev=10658.62 00:09:34.640 lat (usec): min=799, max=55848, avg=4198.64, stdev=10745.94 00:09:34.640 clat percentiles (usec): 00:09:34.640 | 1.00th=[ 873], 5.00th=[ 938], 10.00th=[ 988], 20.00th=[ 1037], 00:09:34.640 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:09:34.640 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[ 1270], 95.00th=[41157], 00:09:34.640 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:09:34.640 | 99.99th=[42730] 00:09:34.640 bw ( KiB/s): min= 694, max= 1480, per=12.83%, avg=1022.00, stdev=322.17, samples=5 00:09:34.640 iops : min= 173, max= 370, avg=255.40, stdev=80.67, samples=5 00:09:34.640 lat (usec) : 1000=10.91% 00:09:34.640 lat (msec) : 2=81.41%, 50=7.53% 00:09:34.640 cpu : usr=0.29%, sys=0.69%, ctx=653, majf=0, minf=2 00:09:34.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.640 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.640 issued rwts: total=651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.640 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2066341: Wed Nov 20 16:21:20 2024 00:09:34.640 read: IOPS=562, BW=2248KiB/s (2302kB/s)(5748KiB/2557msec) 00:09:34.640 slat (nsec): min=7065, max=52463, avg=25992.44, stdev=2987.52 00:09:34.640 clat (usec): min=609, max=42079, avg=1731.28, stdev=5332.04 00:09:34.640 lat (usec): min=634, max=42104, avg=1757.27, stdev=5332.01 00:09:34.640 clat percentiles (usec): 00:09:34.640 | 1.00th=[ 725], 5.00th=[ 807], 10.00th=[ 873], 20.00th=[ 938], 00:09:34.640 | 30.00th=[ 979], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[ 1074], 00:09:34.640 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1172], 
00:09:34.640 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:34.640 | 99.99th=[42206] 00:09:34.640 bw ( KiB/s): min= 367, max= 4024, per=28.83%, avg=2297.40, stdev=1545.85, samples=5 00:09:34.640 iops : min= 91, max= 1006, avg=574.20, stdev=386.70, samples=5 00:09:34.640 lat (usec) : 750=1.81%, 1000=34.56% 00:09:34.640 lat (msec) : 2=61.82%, 50=1.74% 00:09:34.640 cpu : usr=0.47%, sys=1.84%, ctx=1438, majf=0, minf=2 00:09:34.640 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.640 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.640 issued rwts: total=1438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.640 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.640 00:09:34.640 Run status group 0 (all jobs): 00:09:34.640 READ: bw=7967KiB/s (8158kB/s), 943KiB/s-3000KiB/s (966kB/s-3072kB/s), io=24.6MiB (25.8MB), run=2557-3158msec 00:09:34.640 00:09:34.640 Disk stats (read/write): 00:09:34.640 nvme0n1: ios=2081/0, merge=0/0, ticks=2697/0, in_queue=2697, util=92.35% 00:09:34.640 nvme0n2: ios=1999/0, merge=0/0, ticks=2725/0, in_queue=2725, util=92.38% 00:09:34.640 nvme0n3: ios=641/0, merge=0/0, ticks=2460/0, in_queue=2460, util=95.68% 00:09:34.640 nvme0n4: ios=1437/0, merge=0/0, ticks=2443/0, in_queue=2443, util=96.36% 00:09:34.640 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.640 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:34.898 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.898 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:35.156 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.156 16:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:35.156 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:35.156 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2066091 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:35.414 16:21:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:35.414 nvmf hotplug test: fio failed as expected 00:09:35.414 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.674 rmmod nvme_tcp 00:09:35.674 rmmod 
nvme_fabrics 00:09:35.674 rmmod nvme_keyring 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2061917 ']' 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2061917 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2061917 ']' 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2061917 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.674 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2061917 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2061917' 00:09:35.935 killing process with pid 2061917 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2061917 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2061917 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:35.935 16:21:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.935 16:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.935 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.935 00:09:37.935 real 0m29.181s 00:09:37.935 user 2m34.780s 00:09:37.935 sys 0m9.380s 00:09:37.935 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.935 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.935 ************************************ 00:09:37.935 END TEST nvmf_fio_target 00:09:37.935 ************************************ 00:09:38.336 16:21:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:38.336 16:21:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.336 16:21:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.336 16:21:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.336 ************************************ 00:09:38.336 START TEST nvmf_bdevio 00:09:38.336 ************************************ 00:09:38.336 16:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:38.336 * Looking for test storage... 00:09:38.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.336 16:21:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.336 16:21:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.336 --rc genhtml_branch_coverage=1 00:09:38.336 --rc genhtml_function_coverage=1 00:09:38.336 --rc genhtml_legend=1 00:09:38.336 --rc geninfo_all_blocks=1 00:09:38.336 --rc geninfo_unexecuted_blocks=1 00:09:38.336 00:09:38.336 ' 00:09:38.336 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.336 --rc genhtml_branch_coverage=1 00:09:38.337 --rc genhtml_function_coverage=1 00:09:38.337 --rc genhtml_legend=1 00:09:38.337 --rc geninfo_all_blocks=1 00:09:38.337 --rc geninfo_unexecuted_blocks=1 00:09:38.337 00:09:38.337 ' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.337 --rc genhtml_branch_coverage=1 00:09:38.337 --rc genhtml_function_coverage=1 00:09:38.337 --rc genhtml_legend=1 00:09:38.337 --rc geninfo_all_blocks=1 00:09:38.337 --rc geninfo_unexecuted_blocks=1 00:09:38.337 00:09:38.337 ' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.337 --rc genhtml_branch_coverage=1 00:09:38.337 --rc 
genhtml_function_coverage=1 00:09:38.337 --rc genhtml_legend=1 00:09:38.337 --rc geninfo_all_blocks=1 00:09:38.337 --rc geninfo_unexecuted_blocks=1 00:09:38.337 00:09:38.337 ' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.337 16:21:24 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.337 16:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:46.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.478 16:21:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:46.478 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:46.478 Found net devices under 0000:31:00.0: cvl_0_0 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.478 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:46.479 Found net devices under 0000:31:00.1: cvl_0_1 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:46.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:09:46.479 00:09:46.479 --- 10.0.0.2 ping statistics --- 00:09:46.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.479 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:09:46.479 00:09:46.479 --- 10.0.0.1 ping statistics --- 00:09:46.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.479 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2071442 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2071442 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2071442 ']' 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.479 16:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.479 [2024-11-20 16:21:31.655116] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:09:46.479 [2024-11-20 16:21:31.655177] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.479 [2024-11-20 16:21:31.755225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.479 [2024-11-20 16:21:31.805976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.479 [2024-11-20 16:21:31.806038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:46.479 [2024-11-20 16:21:31.806047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.479 [2024-11-20 16:21:31.806055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.479 [2024-11-20 16:21:31.806061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.479 [2024-11-20 16:21:31.808128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:46.479 [2024-11-20 16:21:31.808384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:46.479 [2024-11-20 16:21:31.808543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:46.479 [2024-11-20 16:21:31.808545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.739 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.739 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:46.739 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.740 [2024-11-20 16:21:32.533523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.740 Malloc0 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:46.740 [2024-11-20 
16:21:32.598996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:46.740 { 00:09:46.740 "params": { 00:09:46.740 "name": "Nvme$subsystem", 00:09:46.740 "trtype": "$TEST_TRANSPORT", 00:09:46.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.740 "adrfam": "ipv4", 00:09:46.740 "trsvcid": "$NVMF_PORT", 00:09:46.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.740 "hdgst": ${hdgst:-false}, 00:09:46.740 "ddgst": ${ddgst:-false} 00:09:46.740 }, 00:09:46.740 "method": "bdev_nvme_attach_controller" 00:09:46.740 } 00:09:46.740 EOF 00:09:46.740 )") 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:46.740 16:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:46.740 "params": { 00:09:46.740 "name": "Nvme1", 00:09:46.740 "trtype": "tcp", 00:09:46.740 "traddr": "10.0.0.2", 00:09:46.740 "adrfam": "ipv4", 00:09:46.740 "trsvcid": "4420", 00:09:46.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.740 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.740 "hdgst": false, 00:09:46.740 "ddgst": false 00:09:46.740 }, 00:09:46.740 "method": "bdev_nvme_attach_controller" 00:09:46.740 }' 00:09:46.740 [2024-11-20 16:21:32.655560] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:09:46.740 [2024-11-20 16:21:32.655627] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071771 ] 00:09:46.998 [2024-11-20 16:21:32.733972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:46.998 [2024-11-20 16:21:32.778364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.998 [2024-11-20 16:21:32.778486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.998 [2024-11-20 16:21:32.778489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.256 I/O targets: 00:09:47.256 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:47.256 00:09:47.256 00:09:47.256 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.256 http://cunit.sourceforge.net/ 00:09:47.256 00:09:47.256 00:09:47.256 Suite: bdevio tests on: Nvme1n1 00:09:47.256 Test: blockdev write read block ...passed 00:09:47.256 Test: blockdev write zeroes read block ...passed 00:09:47.256 Test: blockdev write zeroes read no split ...passed 00:09:47.256 Test: blockdev write zeroes read split 
...passed 00:09:47.256 Test: blockdev write zeroes read split partial ...passed 00:09:47.256 Test: blockdev reset ...[2024-11-20 16:21:33.130605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:47.256 [2024-11-20 16:21:33.130668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3f1c0 (9): Bad file descriptor 00:09:47.256 [2024-11-20 16:21:33.146564] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:47.256 passed 00:09:47.256 Test: blockdev write read 8 blocks ...passed 00:09:47.256 Test: blockdev write read size > 128k ...passed 00:09:47.256 Test: blockdev write read invalid size ...passed 00:09:47.256 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:47.256 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:47.256 Test: blockdev write read max offset ...passed 00:09:47.516 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:47.516 Test: blockdev writev readv 8 blocks ...passed 00:09:47.516 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.516 Test: blockdev writev readv block ...passed 00:09:47.516 Test: blockdev writev readv size > 128k ...passed 00:09:47.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.516 Test: blockdev comparev and writev ...[2024-11-20 16:21:33.367148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.516 [2024-11-20 16:21:33.367172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.367184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.516 [2024-11-20 
16:21:33.367190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.367549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.516 [2024-11-20 16:21:33.367557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.367567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.516 [2024-11-20 16:21:33.367572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.367937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.516 [2024-11-20 16:21:33.367945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.367954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.516 [2024-11-20 16:21:33.367960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.368309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.516 [2024-11-20 16:21:33.368318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.368327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:47.516 [2024-11-20 16:21:33.368332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:47.516 passed 00:09:47.516 Test: blockdev nvme passthru rw ...passed 00:09:47.516 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:21:33.452409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.516 [2024-11-20 16:21:33.452420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.452647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.516 [2024-11-20 16:21:33.452654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.452908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.516 [2024-11-20 16:21:33.452915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:47.516 [2024-11-20 16:21:33.453128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:47.516 [2024-11-20 16:21:33.453136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:47.516 passed 00:09:47.516 Test: blockdev nvme admin passthru ...passed 00:09:47.776 Test: blockdev copy ...passed 00:09:47.776 00:09:47.776 Run Summary: Type Total Ran Passed Failed Inactive 00:09:47.776 suites 1 1 n/a 0 0 00:09:47.776 tests 23 23 23 0 0 00:09:47.776 asserts 152 152 152 0 n/a 00:09:47.776 00:09:47.776 Elapsed time = 1.102 seconds 
00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.776 rmmod nvme_tcp 00:09:47.776 rmmod nvme_fabrics 00:09:47.776 rmmod nvme_keyring 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2071442 ']' 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2071442 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2071442 ']' 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2071442 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.776 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2071442 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2071442' 00:09:48.036 killing process with pid 2071442 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2071442 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2071442 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.036 16:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.583 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:50.583 00:09:50.583 real 0m12.016s 00:09:50.583 user 0m12.543s 00:09:50.583 sys 0m6.087s 00:09:50.583 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.583 16:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:50.583 ************************************ 00:09:50.583 END TEST nvmf_bdevio 00:09:50.583 ************************************ 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:50.583 00:09:50.583 real 5m0.440s 00:09:50.583 user 11m42.511s 00:09:50.583 sys 1m48.333s 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.583 ************************************ 00:09:50.583 END TEST nvmf_target_core 00:09:50.583 ************************************ 00:09:50.583 16:21:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:50.583 16:21:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.583 16:21:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.583 16:21:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:50.583 ************************************ 00:09:50.583 START TEST nvmf_target_extra 00:09:50.583 ************************************ 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:50.583 * Looking for test storage... 00:09:50.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.583 --rc genhtml_branch_coverage=1 00:09:50.583 --rc genhtml_function_coverage=1 00:09:50.583 --rc genhtml_legend=1 00:09:50.583 --rc geninfo_all_blocks=1 
00:09:50.583 --rc geninfo_unexecuted_blocks=1 00:09:50.583 00:09:50.583 ' 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.583 --rc genhtml_branch_coverage=1 00:09:50.583 --rc genhtml_function_coverage=1 00:09:50.583 --rc genhtml_legend=1 00:09:50.583 --rc geninfo_all_blocks=1 00:09:50.583 --rc geninfo_unexecuted_blocks=1 00:09:50.583 00:09:50.583 ' 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.583 --rc genhtml_branch_coverage=1 00:09:50.583 --rc genhtml_function_coverage=1 00:09:50.583 --rc genhtml_legend=1 00:09:50.583 --rc geninfo_all_blocks=1 00:09:50.583 --rc geninfo_unexecuted_blocks=1 00:09:50.583 00:09:50.583 ' 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.583 --rc genhtml_branch_coverage=1 00:09:50.583 --rc genhtml_function_coverage=1 00:09:50.583 --rc genhtml_legend=1 00:09:50.583 --rc geninfo_all_blocks=1 00:09:50.583 --rc geninfo_unexecuted_blocks=1 00:09:50.583 00:09:50.583 ' 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.583 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:50.584 ************************************ 00:09:50.584 START TEST nvmf_example 00:09:50.584 ************************************ 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:50.584 * Looking for test storage... 00:09:50.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.584 
16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.584 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.853 --rc genhtml_branch_coverage=1 00:09:50.853 --rc genhtml_function_coverage=1 00:09:50.853 --rc genhtml_legend=1 00:09:50.853 --rc geninfo_all_blocks=1 00:09:50.853 --rc geninfo_unexecuted_blocks=1 00:09:50.853 00:09:50.853 ' 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.853 --rc genhtml_branch_coverage=1 00:09:50.853 --rc genhtml_function_coverage=1 00:09:50.853 --rc genhtml_legend=1 00:09:50.853 --rc geninfo_all_blocks=1 00:09:50.853 --rc geninfo_unexecuted_blocks=1 00:09:50.853 00:09:50.853 ' 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.853 --rc genhtml_branch_coverage=1 00:09:50.853 --rc genhtml_function_coverage=1 00:09:50.853 --rc genhtml_legend=1 00:09:50.853 --rc geninfo_all_blocks=1 00:09:50.853 --rc geninfo_unexecuted_blocks=1 00:09:50.853 00:09:50.853 ' 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.853 --rc 
genhtml_branch_coverage=1 00:09:50.853 --rc genhtml_function_coverage=1 00:09:50.853 --rc genhtml_legend=1 00:09:50.853 --rc geninfo_all_blocks=1 00:09:50.853 --rc geninfo_unexecuted_blocks=1 00:09:50.853 00:09:50.853 ' 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.853 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:50.854 16:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.854 
16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:50.854 16:21:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.995 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:58.996 16:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:58.996 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:58.996 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:58.996 Found net devices under 0000:31:00.0: cvl_0_0 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.996 16:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:58.996 Found net devices under 0000:31:00.1: cvl_0_1 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.996 
16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:09:58.996 00:09:58.996 --- 10.0.0.2 ping statistics --- 00:09:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.996 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:09:58.996 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:09:58.996 00:09:58.996 --- 10.0.0.1 ping statistics --- 00:09:58.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.996 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.997 16:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2076214 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2076214 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2076214 ']' 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:58.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.997 16:21:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:58.997 
16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:58.997 16:21:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:11.201 Initializing NVMe Controllers 00:10:11.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:11.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:11.201 Initialization complete. Launching workers. 00:10:11.201 ======================================================== 00:10:11.201 Latency(us) 00:10:11.201 Device Information : IOPS MiB/s Average min max 00:10:11.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18973.30 74.11 3372.87 671.98 15418.75 00:10:11.201 ======================================================== 00:10:11.201 Total : 18973.30 74.11 3372.87 671.98 15418.75 00:10:11.201 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:11.201 rmmod nvme_tcp 00:10:11.201 rmmod nvme_fabrics 00:10:11.201 rmmod nvme_keyring 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2076214 ']' 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2076214 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2076214 ']' 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2076214 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2076214 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2076214' 00:10:11.201 killing process with pid 2076214 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2076214 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2076214 00:10:11.201 nvmf threads initialize successfully 00:10:11.201 bdev subsystem init successfully 00:10:11.201 created a nvmf target service 00:10:11.201 create targets's poll groups done 00:10:11.201 all subsystems of target started 00:10:11.201 nvmf target is running 00:10:11.201 all subsystems of target stopped 00:10:11.201 destroy targets's poll groups done 00:10:11.201 destroyed the nvmf target service 00:10:11.201 bdev subsystem 
finish successfully 00:10:11.201 nvmf threads destroy successfully 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.201 16:21:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.461 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.461 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:11.461 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.461 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.727 00:10:11.727 real 0m21.075s 00:10:11.727 user 0m46.545s 00:10:11.727 sys 0m6.525s 00:10:11.727 
16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:11.727 ************************************ 00:10:11.727 END TEST nvmf_example 00:10:11.727 ************************************ 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.727 ************************************ 00:10:11.727 START TEST nvmf_filesystem 00:10:11.727 ************************************ 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:11.727 * Looking for test storage... 
00:10:11.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:11.727 
16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.727 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.991 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:11.991 --rc genhtml_branch_coverage=1 00:10:11.991 --rc genhtml_function_coverage=1 00:10:11.991 --rc genhtml_legend=1 00:10:11.991 --rc geninfo_all_blocks=1 00:10:11.991 --rc geninfo_unexecuted_blocks=1 00:10:11.991 00:10:11.991 ' 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.991 --rc genhtml_branch_coverage=1 00:10:11.991 --rc genhtml_function_coverage=1 00:10:11.991 --rc genhtml_legend=1 00:10:11.991 --rc geninfo_all_blocks=1 00:10:11.991 --rc geninfo_unexecuted_blocks=1 00:10:11.991 00:10:11.991 ' 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.991 --rc genhtml_branch_coverage=1 00:10:11.991 --rc genhtml_function_coverage=1 00:10:11.991 --rc genhtml_legend=1 00:10:11.991 --rc geninfo_all_blocks=1 00:10:11.991 --rc geninfo_unexecuted_blocks=1 00:10:11.991 00:10:11.991 ' 00:10:11.991 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.991 --rc genhtml_branch_coverage=1 00:10:11.991 --rc genhtml_function_coverage=1 00:10:11.991 --rc genhtml_legend=1 00:10:11.991 --rc geninfo_all_blocks=1 00:10:11.991 --rc geninfo_unexecuted_blocks=1 00:10:11.991 00:10:11.991 ' 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:11.992 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:11.992 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:11.992 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:11.992 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:11.992 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:11.993 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:11.993 
16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:11.993 #define SPDK_CONFIG_H 00:10:11.993 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:11.993 #define SPDK_CONFIG_APPS 1 00:10:11.993 #define SPDK_CONFIG_ARCH native 00:10:11.993 #undef SPDK_CONFIG_ASAN 00:10:11.993 #undef SPDK_CONFIG_AVAHI 00:10:11.993 #undef SPDK_CONFIG_CET 00:10:11.993 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:11.993 #define SPDK_CONFIG_COVERAGE 1 00:10:11.993 #define SPDK_CONFIG_CROSS_PREFIX 00:10:11.993 #undef SPDK_CONFIG_CRYPTO 00:10:11.993 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:11.993 #undef SPDK_CONFIG_CUSTOMOCF 00:10:11.993 #undef SPDK_CONFIG_DAOS 00:10:11.993 #define SPDK_CONFIG_DAOS_DIR 00:10:11.993 #define SPDK_CONFIG_DEBUG 1 00:10:11.993 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:11.993 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:11.993 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:11.993 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:11.993 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:11.993 #undef SPDK_CONFIG_DPDK_UADK 00:10:11.993 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:11.993 #define SPDK_CONFIG_EXAMPLES 1 00:10:11.993 #undef SPDK_CONFIG_FC 00:10:11.993 #define SPDK_CONFIG_FC_PATH 00:10:11.993 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:11.993 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:11.993 #define SPDK_CONFIG_FSDEV 1 00:10:11.993 #undef SPDK_CONFIG_FUSE 00:10:11.993 #undef SPDK_CONFIG_FUZZER 00:10:11.993 #define SPDK_CONFIG_FUZZER_LIB 00:10:11.993 #undef SPDK_CONFIG_GOLANG 00:10:11.993 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:11.993 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:11.993 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:11.993 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:11.993 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:11.993 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:11.993 #undef SPDK_CONFIG_HAVE_LZ4 00:10:11.993 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:11.993 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:11.993 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:11.993 #define SPDK_CONFIG_IDXD 1 00:10:11.993 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:11.993 #undef SPDK_CONFIG_IPSEC_MB 00:10:11.993 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:11.993 #define SPDK_CONFIG_ISAL 1 00:10:11.993 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:11.993 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:11.993 #define SPDK_CONFIG_LIBDIR 00:10:11.993 #undef SPDK_CONFIG_LTO 00:10:11.993 #define SPDK_CONFIG_MAX_LCORES 128 00:10:11.993 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:11.993 #define SPDK_CONFIG_NVME_CUSE 1 00:10:11.993 #undef SPDK_CONFIG_OCF 00:10:11.993 #define SPDK_CONFIG_OCF_PATH 00:10:11.993 #define SPDK_CONFIG_OPENSSL_PATH 00:10:11.993 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:11.993 #define SPDK_CONFIG_PGO_DIR 00:10:11.993 #undef SPDK_CONFIG_PGO_USE 00:10:11.993 #define SPDK_CONFIG_PREFIX /usr/local 00:10:11.993 #undef SPDK_CONFIG_RAID5F 00:10:11.993 #undef SPDK_CONFIG_RBD 00:10:11.993 #define SPDK_CONFIG_RDMA 1 00:10:11.993 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:11.993 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:11.993 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:11.993 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:11.993 #define SPDK_CONFIG_SHARED 1 00:10:11.993 #undef SPDK_CONFIG_SMA 00:10:11.993 #define SPDK_CONFIG_TESTS 1 00:10:11.993 #undef SPDK_CONFIG_TSAN 00:10:11.993 #define SPDK_CONFIG_UBLK 1 00:10:11.993 #define SPDK_CONFIG_UBSAN 1 00:10:11.993 #undef SPDK_CONFIG_UNIT_TESTS 00:10:11.993 #undef SPDK_CONFIG_URING 00:10:11.993 #define SPDK_CONFIG_URING_PATH 00:10:11.993 #undef SPDK_CONFIG_URING_ZNS 00:10:11.993 #undef SPDK_CONFIG_USDT 00:10:11.993 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:11.993 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:11.993 #define SPDK_CONFIG_VFIO_USER 1 00:10:11.993 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:11.993 #define SPDK_CONFIG_VHOST 1 00:10:11.993 #define SPDK_CONFIG_VIRTIO 1 00:10:11.993 #undef SPDK_CONFIG_VTUNE 00:10:11.993 #define SPDK_CONFIG_VTUNE_DIR 00:10:11.993 #define SPDK_CONFIG_WERROR 1 00:10:11.993 #define SPDK_CONFIG_WPDK_DIR 00:10:11.993 #undef SPDK_CONFIG_XNVME 00:10:11.993 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:11.993 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:11.994 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:11.994 
16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:11.994 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:11.994 
16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:11.994 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:11.994 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:11.995 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2078999 ]] 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2078999 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.QdbrA6 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QdbrA6/tests/target /tmp/spdk.QdbrA6 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123429543936 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356541952 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5926998016 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668237824 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678268928 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847939072 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23371776 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=387072 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:11.996 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=116736 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64678043648 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=229376 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:10:11.996 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:11.997 * Looking for test storage... 
00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123429543936 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8141590528 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.997 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:11.997 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.997 --rc genhtml_branch_coverage=1 00:10:11.997 --rc genhtml_function_coverage=1 00:10:11.997 --rc genhtml_legend=1 00:10:11.997 --rc geninfo_all_blocks=1 00:10:11.997 --rc geninfo_unexecuted_blocks=1 00:10:11.997 00:10:11.997 ' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.997 --rc genhtml_branch_coverage=1 00:10:11.997 --rc genhtml_function_coverage=1 00:10:11.997 --rc genhtml_legend=1 00:10:11.997 --rc geninfo_all_blocks=1 00:10:11.997 --rc geninfo_unexecuted_blocks=1 00:10:11.997 00:10:11.997 ' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.997 --rc genhtml_branch_coverage=1 00:10:11.997 --rc genhtml_function_coverage=1 00:10:11.997 --rc genhtml_legend=1 00:10:11.997 --rc geninfo_all_blocks=1 00:10:11.997 --rc geninfo_unexecuted_blocks=1 00:10:11.997 00:10:11.997 ' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.997 --rc genhtml_branch_coverage=1 00:10:11.997 --rc genhtml_function_coverage=1 00:10:11.997 --rc genhtml_legend=1 00:10:11.997 --rc geninfo_all_blocks=1 00:10:11.997 --rc geninfo_unexecuted_blocks=1 00:10:11.997 00:10:11.997 ' 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.997 16:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.997 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.998 16:21:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.140 16:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:20.140 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:20.140 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.140 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.141 16:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:20.141 Found net devices under 0000:31:00.0: cvl_0_0 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:20.141 Found net devices under 0000:31:00.1: cvl_0_1 00:10:20.141 16:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.141 16:22:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:20.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:10:20.141 00:10:20.141 --- 10.0.0.2 ping statistics --- 00:10:20.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.141 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:10:20.141 00:10:20.141 --- 10.0.0.1 ping statistics --- 00:10:20.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.141 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:20.141 16:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:20.141 ************************************ 00:10:20.141 START TEST nvmf_filesystem_no_in_capsule 00:10:20.141 ************************************ 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2082815 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2082815 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2082815 ']' 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.141 16:22:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.141 [2024-11-20 16:22:05.268766] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:10:20.141 [2024-11-20 16:22:05.268830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.141 [2024-11-20 16:22:05.354230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.141 [2024-11-20 16:22:05.397822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.141 [2024-11-20 16:22:05.397860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:20.141 [2024-11-20 16:22:05.397872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.141 [2024-11-20 16:22:05.397879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.141 [2024-11-20 16:22:05.397885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.141 [2024-11-20 16:22:05.399420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.141 [2024-11-20 16:22:05.399504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.141 [2024-11-20 16:22:05.399653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.141 [2024-11-20 16:22:05.399653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.141 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.141 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:20.141 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.141 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.142 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.402 [2024-11-20 16:22:06.120461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.402 Malloc1 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.402 [2024-11-20 16:22:06.258063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:20.402 16:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.402 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:20.402 { 00:10:20.402 "name": "Malloc1", 00:10:20.402 "aliases": [ 00:10:20.402 "8d55694d-cbf5-4e15-84f4-8ac685fa1c9f" 00:10:20.402 ], 00:10:20.402 "product_name": "Malloc disk", 00:10:20.402 "block_size": 512, 00:10:20.402 "num_blocks": 1048576, 00:10:20.402 "uuid": "8d55694d-cbf5-4e15-84f4-8ac685fa1c9f", 00:10:20.402 "assigned_rate_limits": { 00:10:20.402 "rw_ios_per_sec": 0, 00:10:20.402 "rw_mbytes_per_sec": 0, 00:10:20.402 "r_mbytes_per_sec": 0, 00:10:20.402 "w_mbytes_per_sec": 0 00:10:20.402 }, 00:10:20.402 "claimed": true, 00:10:20.402 "claim_type": "exclusive_write", 00:10:20.402 "zoned": false, 00:10:20.402 "supported_io_types": { 00:10:20.402 "read": true, 00:10:20.402 "write": true, 00:10:20.402 "unmap": true, 00:10:20.402 "flush": true, 00:10:20.402 "reset": true, 00:10:20.402 "nvme_admin": false, 00:10:20.402 "nvme_io": false, 00:10:20.402 "nvme_io_md": false, 00:10:20.402 "write_zeroes": true, 00:10:20.402 "zcopy": true, 00:10:20.402 "get_zone_info": false, 00:10:20.402 "zone_management": false, 00:10:20.402 "zone_append": false, 00:10:20.402 "compare": false, 00:10:20.402 "compare_and_write": 
false, 00:10:20.402 "abort": true, 00:10:20.402 "seek_hole": false, 00:10:20.402 "seek_data": false, 00:10:20.402 "copy": true, 00:10:20.402 "nvme_iov_md": false 00:10:20.402 }, 00:10:20.402 "memory_domains": [ 00:10:20.402 { 00:10:20.402 "dma_device_id": "system", 00:10:20.402 "dma_device_type": 1 00:10:20.402 }, 00:10:20.402 { 00:10:20.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.402 "dma_device_type": 2 00:10:20.403 } 00:10:20.403 ], 00:10:20.403 "driver_specific": {} 00:10:20.403 } 00:10:20.403 ]' 00:10:20.403 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:20.403 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:20.403 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:20.662 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:20.663 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:20.663 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:20.663 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:20.663 16:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.044 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:22.044 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.044 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.044 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:22.044 16:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:23.964 16:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:23.964 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:24.225 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:24.225 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:24.225 16:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:24.225 16:22:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:25.163 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:25.163 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:25.163 16:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:25.163 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.163 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.423 ************************************ 00:10:25.423 START TEST filesystem_ext4 00:10:25.423 ************************************ 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:25.423 16:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:25.423 16:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:25.423 mke2fs 1.47.0 (5-Feb-2023) 00:10:25.423 Discarding device blocks: 0/522240 done 00:10:25.423 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:25.423 Filesystem UUID: 28bcf959-cc4b-41ed-841d-02df2cc60a8e 00:10:25.423 Superblock backups stored on blocks: 00:10:25.423 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:25.423 00:10:25.423 Allocating group tables: 0/64 done 00:10:25.423 Writing inode tables: 0/64 done 00:10:27.963 Creating journal (8192 blocks): done 00:10:27.963 Writing superblocks and filesystem accounting information: 0/64 done 00:10:27.963 00:10:27.963 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:27.963 16:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:33.245 16:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2082815 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:33.245 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:33.504 00:10:33.504 real 0m8.051s 00:10:33.504 user 0m0.028s 00:10:33.504 sys 0m0.078s 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:33.505 ************************************ 00:10:33.505 END TEST filesystem_ext4 00:10:33.505 ************************************ 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:33.505 
16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:33.505 ************************************ 00:10:33.505 START TEST filesystem_btrfs 00:10:33.505 ************************************ 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:33.505 16:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:33.505 btrfs-progs v6.8.1 00:10:33.505 See https://btrfs.readthedocs.io for more information. 00:10:33.505 00:10:33.505 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:33.505 NOTE: several default settings have changed in version 5.15, please make sure 00:10:33.505 this does not affect your deployments: 00:10:33.505 - DUP for metadata (-m dup) 00:10:33.505 - enabled no-holes (-O no-holes) 00:10:33.505 - enabled free-space-tree (-R free-space-tree) 00:10:33.505 00:10:33.505 Label: (null) 00:10:33.505 UUID: c382359f-e8be-4c26-ad44-b0a9f70e6053 00:10:33.505 Node size: 16384 00:10:33.505 Sector size: 4096 (CPU page size: 4096) 00:10:33.505 Filesystem size: 510.00MiB 00:10:33.505 Block group profiles: 00:10:33.505 Data: single 8.00MiB 00:10:33.505 Metadata: DUP 32.00MiB 00:10:33.505 System: DUP 8.00MiB 00:10:33.505 SSD detected: yes 00:10:33.505 Zoned device: no 00:10:33.505 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:33.505 Checksum: crc32c 00:10:33.505 Number of devices: 1 00:10:33.505 Devices: 00:10:33.505 ID SIZE PATH 00:10:33.505 1 510.00MiB /dev/nvme0n1p1 00:10:33.505 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:33.505 16:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.444 16:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2082815 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.444 00:10:34.444 real 0m0.983s 00:10:34.444 user 0m0.038s 00:10:34.444 sys 0m0.110s 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.444 
16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.444 ************************************ 00:10:34.444 END TEST filesystem_btrfs 00:10:34.444 ************************************ 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.444 ************************************ 00:10:34.444 START TEST filesystem_xfs 00:10:34.444 ************************************ 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:34.444 16:22:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:34.444 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:34.444 = sectsz=512 attr=2, projid32bit=1 00:10:34.444 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:34.444 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:34.444 data = bsize=4096 blocks=130560, imaxpct=25 00:10:34.444 = sunit=0 swidth=0 blks 00:10:34.444 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:34.444 log =internal log bsize=4096 blocks=16384, version=2 00:10:34.444 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:34.444 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:35.822 Discarding blocks...Done. 
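The xtrace lines above (autotest_common.sh@930–941, repeated for ext4, btrfs, and xfs) trace the `make_filesystem` helper: it picks `-F` as the force flag for ext4 and `-f` for the other filesystems, then invokes the matching `mkfs` tool on the partition. The sketch below reconstructs just that flag-selection logic from the logged lines; it echoes the command instead of running `mkfs`, and the real helper's retry/error handling (the logged `local i=0` and `return 0` at line 949) is omitted as it is not fully visible in this excerpt.

```shell
# Minimal sketch of the make_filesystem flag selection seen in the
# xtrace output above (autotest_common.sh@935-941). Echoes the mkfs
# command rather than executing it; retry logic is intentionally omitted.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force

    # As logged: mkfs.ext4 takes -F to force, while mkfs.btrfs and
    # mkfs.xfs take -f ('[' ext4 = ext4 ']' -> force=-F, else force=-f)
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi

    echo "mkfs.$fstype $force $dev_name"
}

make_filesystem ext4 /dev/nvme0n1p1
make_filesystem xfs /dev/nvme0n1p1
```

Run against the partition created by `parted ... mkpart SPDK_TEST 0% 100%` earlier in the log, this yields `mkfs.ext4 -F /dev/nvme0n1p1` and `mkfs.xfs -f /dev/nvme0n1p1`, matching the commands whose output appears above.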
00:10:35.822 16:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:35.822 16:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2082815 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.732 16:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.732 00:10:37.732 real 0m3.078s 00:10:37.732 user 0m0.028s 00:10:37.732 sys 0m0.078s 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.732 ************************************ 00:10:37.732 END TEST filesystem_xfs 00:10:37.732 ************************************ 00:10:37.732 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:37.992 16:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:38.252 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2082815 00:10:38.513 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2082815 ']' 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2082815 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2082815 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2082815' 00:10:38.514 killing process with pid 2082815 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2082815 00:10:38.514 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2082815 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:38.776 00:10:38.776 real 0m19.383s 00:10:38.776 user 1m16.621s 00:10:38.776 sys 0m1.395s 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.776 ************************************ 00:10:38.776 END TEST nvmf_filesystem_no_in_capsule 00:10:38.776 ************************************ 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.776 16:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.776 ************************************ 00:10:38.776 START TEST nvmf_filesystem_in_capsule 00:10:38.776 ************************************ 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2086861 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2086861 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2086861 ']' 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.776 16:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.776 16:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.776 [2024-11-20 16:22:24.725209] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:10:38.776 [2024-11-20 16:22:24.725261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.038 [2024-11-20 16:22:24.807431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.038 [2024-11-20 16:22:24.847659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.038 [2024-11-20 16:22:24.847693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.038 [2024-11-20 16:22:24.847701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.038 [2024-11-20 16:22:24.847707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.038 [2024-11-20 16:22:24.847713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:39.038 [2024-11-20 16:22:24.849289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.038 [2024-11-20 16:22:24.849404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.038 [2024-11-20 16:22:24.849561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.038 [2024-11-20 16:22:24.849562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.609 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.609 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:39.609 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.609 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.609 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.869 [2024-11-20 16:22:25.577191] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.869 Malloc1 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.869 16:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.869 [2024-11-20 16:22:25.716937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.869 16:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.869 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:10:39.869 {
00:10:39.869 "name": "Malloc1",
00:10:39.869 "aliases": [
00:10:39.869 "1b4e70f0-016b-48ef-a924-a11894615ba7"
00:10:39.869 ],
00:10:39.869 "product_name": "Malloc disk",
00:10:39.869 "block_size": 512,
00:10:39.869 "num_blocks": 1048576,
00:10:39.869 "uuid": "1b4e70f0-016b-48ef-a924-a11894615ba7",
00:10:39.869 "assigned_rate_limits": {
00:10:39.869 "rw_ios_per_sec": 0,
00:10:39.870 "rw_mbytes_per_sec": 0,
00:10:39.870 "r_mbytes_per_sec": 0,
00:10:39.870 "w_mbytes_per_sec": 0
00:10:39.870 },
00:10:39.870 "claimed": true,
00:10:39.870 "claim_type": "exclusive_write",
00:10:39.870 "zoned": false,
00:10:39.870 "supported_io_types": {
00:10:39.870 "read": true,
00:10:39.870 "write": true,
00:10:39.870 "unmap": true,
00:10:39.870 "flush": true,
00:10:39.870 "reset": true,
00:10:39.870 "nvme_admin": false,
00:10:39.870 "nvme_io": false,
00:10:39.870 "nvme_io_md": false,
00:10:39.870 "write_zeroes": true,
00:10:39.870 "zcopy": true,
00:10:39.870 "get_zone_info": false,
00:10:39.870 "zone_management": false,
00:10:39.870 "zone_append": false,
00:10:39.870 "compare": false,
00:10:39.870 "compare_and_write": false,
00:10:39.870 "abort": true,
00:10:39.870 "seek_hole": false,
00:10:39.870 "seek_data": false,
00:10:39.870 "copy": true,
00:10:39.870 "nvme_iov_md": false
00:10:39.870 },
00:10:39.870 "memory_domains": [
00:10:39.870 {
00:10:39.870 "dma_device_id": "system",
00:10:39.870 "dma_device_type": 1
00:10:39.870 },
00:10:39.870 {
00:10:39.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:39.870 "dma_device_type": 2
00:10:39.870 }
00:10:39.870 ],
00:10:39.870
"driver_specific": {} 00:10:39.870 } 00:10:39.870 ]' 00:10:39.870 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:39.870 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:39.870 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:40.130 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:40.130 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:40.130 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:40.130 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:40.130 16:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.511 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.511 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:41.511 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.511 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:41.511 16:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:44.062 16:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:44.062 16:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:44.323 16:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:45.265 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:45.265 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:45.265 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:45.265 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.265 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.266 ************************************ 00:10:45.266 START TEST filesystem_in_capsule_ext4 00:10:45.266 ************************************ 00:10:45.266 16:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:45.266 16:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:45.266 mke2fs 1.47.0 (5-Feb-2023) 00:10:45.527 Discarding device blocks: 
0/522240 done
00:10:45.527 Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:45.527 Filesystem UUID: 533b2673-e525-4ba3-ad1d-88bc7166e605
00:10:45.527 Superblock backups stored on blocks:
00:10:45.527 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:45.527
00:10:45.527 Allocating group tables: 0/64 done
00:10:45.527 Writing inode tables: 0/64 done
00:10:46.918 Creating journal (8192 blocks): done
00:10:46.918 Writing superblocks and filesystem accounting information: 0/64 done
00:10:46.918
00:10:46.918 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:10:46.918 16:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:52.341 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 --
target/filesystem.sh@37 -- # kill -0 2086861 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.602 00:10:52.602 real 0m7.213s 00:10:52.602 user 0m0.028s 00:10:52.602 sys 0m0.079s 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:52.602 ************************************ 00:10:52.602 END TEST filesystem_in_capsule_ext4 00:10:52.602 ************************************ 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.602 ************************************ 00:10:52.602 START 
TEST filesystem_in_capsule_btrfs 00:10:52.602 ************************************ 00:10:52.602 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:52.603 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:52.864 btrfs-progs v6.8.1
00:10:52.864 See https://btrfs.readthedocs.io for more information.
00:10:52.864
00:10:52.864 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:52.864 NOTE: several default settings have changed in version 5.15, please make sure
00:10:52.864 this does not affect your deployments:
00:10:52.864 - DUP for metadata (-m dup)
00:10:52.864 - enabled no-holes (-O no-holes)
00:10:52.864 - enabled free-space-tree (-R free-space-tree)
00:10:52.864
00:10:52.864 Label: (null)
00:10:52.864 UUID: e3a871f8-8bac-4523-95ca-b83f0731dd35
00:10:52.864 Node size: 16384
00:10:52.864 Sector size: 4096 (CPU page size: 4096)
00:10:52.864 Filesystem size: 510.00MiB
00:10:52.864 Block group profiles:
00:10:52.864 Data: single 8.00MiB
00:10:52.864 Metadata: DUP 32.00MiB
00:10:52.864 System: DUP 8.00MiB
00:10:52.864 SSD detected: yes
00:10:52.864 Zoned device: no
00:10:52.864 Features: extref, skinny-metadata, no-holes, free-space-tree
00:10:52.864 Checksum: crc32c
00:10:52.864 Number of devices: 1
00:10:52.864 Devices:
00:10:52.864 ID SIZE PATH
00:10:52.864 1 510.00MiB /dev/nvme0n1p1
00:10:52.864
00:10:52.864 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:10:52.864 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:53.125 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:53.125 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:10:53.125 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:53.125 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:53.125 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:53.125 16:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:53.125 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2086861 00:10:53.125 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:53.125 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:53.125 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:53.125 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:53.125 00:10:53.125 real 0m0.560s 00:10:53.125 user 0m0.032s 00:10:53.125 sys 0m0.110s 00:10:53.125 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.125 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:53.125 ************************************ 00:10:53.125 END TEST filesystem_in_capsule_btrfs 00:10:53.125 ************************************ 00:10:53.386 16:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.386 ************************************ 00:10:53.386 START TEST filesystem_in_capsule_xfs 00:10:53.386 ************************************ 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:53.386 
16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:10:53.386 16:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:53.386 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:53.386 = sectsz=512 attr=2, projid32bit=1
00:10:53.386 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:53.386 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:10:53.386 data = bsize=4096 blocks=130560, imaxpct=25
00:10:53.386 = sunit=0 swidth=0 blks
00:10:53.386 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:10:53.386 log =internal log bsize=4096 blocks=16384, version=2
00:10:53.386 = sectsz=512 sunit=0 blks, lazy-count=1
00:10:53.386 realtime =none extsz=4096 blocks=0, rtextents=0
00:10:54.328 Discarding blocks...Done.
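[Editor's note, not part of the log: the mkfs.xfs geometry above can be sanity-checked by hand. The data section reports bsize=4096 and blocks=130560, which multiplies out to the same 510 MiB partition size that mkfs.btrfs reported earlier:]

```shell
#!/bin/sh
# Cross-check the mkfs.xfs data-section geometry against the partition size.
bsize=4096      # data section block size from the mkfs.xfs report
blocks=130560   # data section block count from the mkfs.xfs report
bytes=$((bsize * blocks))
echo "data section: $((bytes / 1024 / 1024)) MiB"   # → data section: 510 MiB
```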
00:10:54.328 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:54.328 16:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2086861 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
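[Editor's note, not part of the log: the `kill -0 2086861` seen above is how the harness confirms the nvmf target process survived the filesystem I/O. Signal 0 delivers nothing; the command's exit status only reports whether the PID exists and is signalable. A standalone illustration, with a `sleep` process standing in for the target:]

```shell
#!/bin/sh
# kill -0 sends no signal; its exit status is a pure liveness probe.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is alive"
fi
kill "$pid"
```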
00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.875 00:10:56.875 real 0m3.688s 00:10:56.875 user 0m0.032s 00:10:56.875 sys 0m0.075s 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.875 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.875 ************************************ 00:10:56.875 END TEST filesystem_in_capsule_xfs 00:10:56.875 ************************************ 00:10:57.135 16:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:57.395 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:57.395 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.396 16:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2086861 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2086861 ']' 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2086861 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.396 16:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2086861 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2086861' 00:10:57.396 killing process with pid 2086861 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2086861 00:10:57.396 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2086861 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:57.656 00:10:57.656 real 0m18.892s 00:10:57.656 user 1m14.678s 00:10:57.656 sys 0m1.417s 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.656 ************************************ 00:10:57.656 END TEST nvmf_filesystem_in_capsule 00:10:57.656 ************************************ 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.656 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.656 rmmod nvme_tcp 00:10:57.917 rmmod nvme_fabrics 00:10:57.917 rmmod nvme_keyring 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.917 16:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.832 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.832 00:10:59.832 real 0m48.262s 00:10:59.832 user 2m33.542s 00:10:59.832 sys 0m8.481s 00:10:59.832 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.832 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.832 ************************************ 00:10:59.832 END TEST nvmf_filesystem 00:10:59.832 ************************************ 00:11:00.093 16:22:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:00.093 16:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.093 16:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.093 16:22:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.093 ************************************ 00:11:00.093 START TEST nvmf_target_discovery 00:11:00.093 ************************************ 00:11:00.093 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:00.093 * Looking for test storage... 
00:11:00.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.093 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.093 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.093 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:00.093 
16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:00.093 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.093 --rc genhtml_branch_coverage=1 00:11:00.093 --rc genhtml_function_coverage=1 00:11:00.094 --rc genhtml_legend=1 00:11:00.094 --rc geninfo_all_blocks=1 00:11:00.094 --rc geninfo_unexecuted_blocks=1 00:11:00.094 00:11:00.094 ' 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.094 --rc genhtml_branch_coverage=1 00:11:00.094 --rc genhtml_function_coverage=1 00:11:00.094 --rc genhtml_legend=1 00:11:00.094 --rc geninfo_all_blocks=1 00:11:00.094 --rc geninfo_unexecuted_blocks=1 00:11:00.094 00:11:00.094 ' 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.094 --rc genhtml_branch_coverage=1 00:11:00.094 --rc genhtml_function_coverage=1 00:11:00.094 --rc genhtml_legend=1 00:11:00.094 --rc geninfo_all_blocks=1 00:11:00.094 --rc geninfo_unexecuted_blocks=1 00:11:00.094 00:11:00.094 ' 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.094 --rc genhtml_branch_coverage=1 00:11:00.094 --rc genhtml_function_coverage=1 00:11:00.094 --rc genhtml_legend=1 00:11:00.094 --rc geninfo_all_blocks=1 00:11:00.094 --rc geninfo_unexecuted_blocks=1 00:11:00.094 00:11:00.094 ' 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.094 16:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.094 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.355 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.355 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.355 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.355 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.355 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.356 16:22:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.494 16:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.494 16:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:08.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:08.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.494 16:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.494 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:08.495 Found net devices under 0000:31:00.0: cvl_0_0 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.495 16:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:08.495 Found net devices under 0000:31:00.1: cvl_0_1 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:11:08.495 00:11:08.495 --- 10.0.0.2 ping statistics --- 00:11:08.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.495 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:08.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:11:08.495 00:11:08.495 --- 10.0.0.1 ping statistics --- 00:11:08.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.495 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2094857 00:11:08.495 16:22:53 
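The `nvmf_tcp_init` steps traced above move the target-side interface (`cvl_0_0`) into a private network namespace, leave the initiator side (`cvl_0_1`) in the root namespace, open TCP port 4420, and verify reachability in both directions with `ping`. The sequence can be sketched as a dry run — `run()` only prints each command here, since executing for real needs root plus the two `cvl_0_*` interfaces from this specific test rig:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init plumbing traced in the log above.
# run() only echoes; drop the wrapper to execute (requires root and the
# cvl_0_0/cvl_0_1 interfaces present on the CI node).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target namespace name, as in the log
TGT_IF=cvl_0_0       # target-side interface
INI_IF=cvl_0_1       # initiator-side interface

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                          # target NIC into the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
run ping -c 1 10.0.0.2                    # target reachable from root namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1  # initiator reachable from inside the namespace
```

The `-m comment SPDK_NVMF:...` tag on the iptables rule is what the teardown later greps for, so the test can remove exactly its own firewall rules.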
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2094857 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2094857 ']' 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.495 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.495 [2024-11-20 16:22:53.406387] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:11:08.495 [2024-11-20 16:22:53.406434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.495 [2024-11-20 16:22:53.486182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.495 [2024-11-20 16:22:53.521976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:08.495 [2024-11-20 16:22:53.522014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.495 [2024-11-20 16:22:53.522026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.495 [2024-11-20 16:22:53.522033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.495 [2024-11-20 16:22:53.522038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.495 [2024-11-20 16:22:53.523777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.495 [2024-11-20 16:22:53.523891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.495 [2024-11-20 16:22:53.524046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.495 [2024-11-20 16:22:53.524047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.495 [2024-11-20 16:22:54.250926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.495 Null1 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.495 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 
16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 [2024-11-20 16:22:54.311267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 Null2 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 
16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 Null3 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 Null4 00:11:08.496 
16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.496 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
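The `discovery.sh@26`–`@30` loop above provisions one null bdev, one subsystem, one namespace, and one TCP listener per index, then exposes the discovery service and a referral. A dry-run sketch of that loop — `rpc()` only echoes here; in the real test `rpc_cmd` forwards to SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock` (arguments copied from the log; their units are not restated here):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/discovery.sh provisioning loop in the log.
# rpc() only echoes; the real rpc_cmd talks to the nvmf_tgt RPC socket.
rpc() { echo "rpc_cmd $*"; }

for i in $(seq 1 4); do
  rpc bdev_null_create "Null$i" 102400 512   # args as in the log: name, size, block size
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      -a -s "SPDK0000000000000$i"            # -a: allow any host; -s: serial number
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

This yields the six discovery-log records reported next: one current discovery subsystem, four NVMe subsystems (`cnode1`..`cnode4`), and one referral on port 4430.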
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.757 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:11:08.757 00:11:08.757 Discovery Log Number of Records 6, Generation counter 6 00:11:08.757 =====Discovery Log Entry 0====== 00:11:08.757 trtype: tcp 00:11:08.757 adrfam: ipv4 00:11:08.757 subtype: current discovery subsystem 00:11:08.757 treq: not required 00:11:08.757 portid: 0 00:11:08.757 trsvcid: 4420 00:11:08.757 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:08.757 traddr: 10.0.0.2 00:11:08.757 eflags: explicit discovery connections, duplicate discovery information 00:11:08.757 sectype: none 00:11:08.757 =====Discovery Log Entry 1====== 00:11:08.757 trtype: tcp 00:11:08.757 adrfam: ipv4 00:11:08.757 subtype: nvme subsystem 00:11:08.757 treq: not required 00:11:08.757 portid: 0 00:11:08.757 trsvcid: 4420 00:11:08.757 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:08.757 traddr: 10.0.0.2 00:11:08.757 eflags: none 00:11:08.757 sectype: none 00:11:08.757 =====Discovery Log Entry 2====== 00:11:08.757 
trtype: tcp 00:11:08.757 adrfam: ipv4 00:11:08.757 subtype: nvme subsystem 00:11:08.757 treq: not required 00:11:08.757 portid: 0 00:11:08.757 trsvcid: 4420 00:11:08.757 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:08.757 traddr: 10.0.0.2 00:11:08.757 eflags: none 00:11:08.757 sectype: none 00:11:08.757 =====Discovery Log Entry 3====== 00:11:08.757 trtype: tcp 00:11:08.757 adrfam: ipv4 00:11:08.757 subtype: nvme subsystem 00:11:08.757 treq: not required 00:11:08.757 portid: 0 00:11:08.757 trsvcid: 4420 00:11:08.757 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:08.757 traddr: 10.0.0.2 00:11:08.757 eflags: none 00:11:08.757 sectype: none 00:11:08.757 =====Discovery Log Entry 4====== 00:11:08.757 trtype: tcp 00:11:08.757 adrfam: ipv4 00:11:08.757 subtype: nvme subsystem 00:11:08.757 treq: not required 00:11:08.757 portid: 0 00:11:08.757 trsvcid: 4420 00:11:08.758 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:08.758 traddr: 10.0.0.2 00:11:08.758 eflags: none 00:11:08.758 sectype: none 00:11:08.758 =====Discovery Log Entry 5====== 00:11:08.758 trtype: tcp 00:11:08.758 adrfam: ipv4 00:11:08.758 subtype: discovery subsystem referral 00:11:08.758 treq: not required 00:11:08.758 portid: 0 00:11:08.758 trsvcid: 4430 00:11:08.758 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:08.758 traddr: 10.0.0.2 00:11:08.758 eflags: none 00:11:08.758 sectype: none 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:08.758 Perform nvmf subsystem discovery via RPC 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:08.758 [ 00:11:08.758 { 00:11:08.758 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:08.758 "subtype": "Discovery", 00:11:08.758 "listen_addresses": [ 00:11:08.758 { 00:11:08.758 "trtype": "TCP", 00:11:08.758 "adrfam": "IPv4", 00:11:08.758 "traddr": "10.0.0.2", 00:11:08.758 "trsvcid": "4420" 00:11:08.758 } 00:11:08.758 ], 00:11:08.758 "allow_any_host": true, 00:11:08.758 "hosts": [] 00:11:08.758 }, 00:11:08.758 { 00:11:08.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:08.758 "subtype": "NVMe", 00:11:08.758 "listen_addresses": [ 00:11:08.758 { 00:11:08.758 "trtype": "TCP", 00:11:08.758 "adrfam": "IPv4", 00:11:08.758 "traddr": "10.0.0.2", 00:11:08.758 "trsvcid": "4420" 00:11:08.758 } 00:11:08.758 ], 00:11:08.758 "allow_any_host": true, 00:11:08.758 "hosts": [], 00:11:08.758 "serial_number": "SPDK00000000000001", 00:11:08.758 "model_number": "SPDK bdev Controller", 00:11:08.758 "max_namespaces": 32, 00:11:08.758 "min_cntlid": 1, 00:11:08.758 "max_cntlid": 65519, 00:11:08.758 "namespaces": [ 00:11:08.758 { 00:11:08.758 "nsid": 1, 00:11:08.758 "bdev_name": "Null1", 00:11:08.758 "name": "Null1", 00:11:08.758 "nguid": "B9801FACE8934443ACC94CAFB6E17283", 00:11:08.758 "uuid": "b9801fac-e893-4443-acc9-4cafb6e17283" 00:11:08.758 } 00:11:08.758 ] 00:11:08.758 }, 00:11:08.758 { 00:11:08.758 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:08.758 "subtype": "NVMe", 00:11:08.758 "listen_addresses": [ 00:11:08.758 { 00:11:08.758 "trtype": "TCP", 00:11:08.758 "adrfam": "IPv4", 00:11:08.758 "traddr": "10.0.0.2", 00:11:08.758 "trsvcid": "4420" 00:11:08.758 } 00:11:08.758 ], 00:11:08.758 "allow_any_host": true, 00:11:08.758 "hosts": [], 00:11:08.758 "serial_number": "SPDK00000000000002", 00:11:08.758 "model_number": "SPDK bdev Controller", 00:11:08.758 "max_namespaces": 32, 00:11:08.758 "min_cntlid": 1, 00:11:08.758 "max_cntlid": 65519, 00:11:08.758 "namespaces": [ 00:11:08.758 { 00:11:08.758 "nsid": 1, 00:11:08.758 "bdev_name": "Null2", 00:11:08.758 "name": "Null2", 00:11:08.758 "nguid": "F7DEAC6888A44A9394E3B1763843922F", 
00:11:08.758 "uuid": "f7deac68-88a4-4a93-94e3-b1763843922f" 00:11:08.758 } 00:11:08.758 ] 00:11:08.758 }, 00:11:08.758 { 00:11:08.758 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:08.758 "subtype": "NVMe", 00:11:08.758 "listen_addresses": [ 00:11:08.758 { 00:11:08.758 "trtype": "TCP", 00:11:08.758 "adrfam": "IPv4", 00:11:08.758 "traddr": "10.0.0.2", 00:11:08.758 "trsvcid": "4420" 00:11:08.758 } 00:11:08.758 ], 00:11:08.758 "allow_any_host": true, 00:11:08.758 "hosts": [], 00:11:08.758 "serial_number": "SPDK00000000000003", 00:11:08.758 "model_number": "SPDK bdev Controller", 00:11:08.758 "max_namespaces": 32, 00:11:08.758 "min_cntlid": 1, 00:11:08.758 "max_cntlid": 65519, 00:11:08.758 "namespaces": [ 00:11:08.758 { 00:11:08.758 "nsid": 1, 00:11:08.758 "bdev_name": "Null3", 00:11:08.758 "name": "Null3", 00:11:08.758 "nguid": "ACFA5B39471D4A0088DF2278EB4BE59F", 00:11:08.758 "uuid": "acfa5b39-471d-4a00-88df-2278eb4be59f" 00:11:08.758 } 00:11:08.758 ] 00:11:08.758 }, 00:11:08.758 { 00:11:08.758 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:08.758 "subtype": "NVMe", 00:11:08.758 "listen_addresses": [ 00:11:08.758 { 00:11:08.758 "trtype": "TCP", 00:11:08.758 "adrfam": "IPv4", 00:11:08.758 "traddr": "10.0.0.2", 00:11:08.758 "trsvcid": "4420" 00:11:08.758 } 00:11:08.758 ], 00:11:08.758 "allow_any_host": true, 00:11:08.758 "hosts": [], 00:11:08.758 "serial_number": "SPDK00000000000004", 00:11:08.758 "model_number": "SPDK bdev Controller", 00:11:08.758 "max_namespaces": 32, 00:11:08.758 "min_cntlid": 1, 00:11:08.758 "max_cntlid": 65519, 00:11:08.758 "namespaces": [ 00:11:08.758 { 00:11:08.758 "nsid": 1, 00:11:08.758 "bdev_name": "Null4", 00:11:08.758 "name": "Null4", 00:11:08.758 "nguid": "8D87F895C59B4437A1D750C7996ECA28", 00:11:08.758 "uuid": "8d87f895-c59b-4437-a1d7-50c7996eca28" 00:11:08.758 } 00:11:08.758 ] 00:11:08.758 } 00:11:08.758 ] 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.758 
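The `nvmf_get_subsystems` JSON above is the shape the test's later `jq` steps consume (e.g. `jq -r '.[].name'` over `bdev_get_bdevs`). For illustration, a dependency-free way to pull the NQNs out of output shaped like that — the heredoc below is a trimmed sample mirroring the log's RPC reply, not the live output:

```shell
#!/usr/bin/env bash
# Extract subsystem NQNs from nvmf_get_subsystems-style JSON.
# Trimmed sample standing in for the RPC output; the real test would
# pipe rpc_cmd output through jq instead.
json='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe"},
  {"nqn": "nqn.2016-06.io.spdk:cnode2", "subtype": "NVMe"}
]'
# grep isolates each "nqn": "..." pair; cut takes the quoted value (4th field).
nqns=$(printf '%s\n' "$json" | grep -o '"nqn": "[^"]*"' | cut -d'"' -f4)
printf '%s\n' "$nqns"
```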
16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.758 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.018 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.018 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:09.018 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.018 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.019 rmmod nvme_tcp 00:11:09.019 rmmod nvme_fabrics 00:11:09.019 rmmod nvme_keyring 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2094857 ']' 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2094857 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2094857 ']' 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2094857 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.019 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2094857 00:11:09.279 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.279 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.279 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2094857' 00:11:09.279 killing process with pid 2094857 00:11:09.279 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2094857 00:11:09.279 16:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2094857 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.279 16:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.826 00:11:11.826 real 0m11.342s 00:11:11.826 user 0m8.648s 00:11:11.826 sys 0m5.815s 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:11.826 ************************************ 00:11:11.826 END TEST nvmf_target_discovery 00:11:11.826 ************************************ 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.826 ************************************ 00:11:11.826 START TEST nvmf_referrals 00:11:11.826 ************************************ 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:11.826 * Looking for test storage... 
00:11:11.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:11.826 16:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:11.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.826 
--rc genhtml_branch_coverage=1 00:11:11.826 --rc genhtml_function_coverage=1 00:11:11.826 --rc genhtml_legend=1 00:11:11.826 --rc geninfo_all_blocks=1 00:11:11.826 --rc geninfo_unexecuted_blocks=1 00:11:11.826 00:11:11.826 ' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:11.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.826 --rc genhtml_branch_coverage=1 00:11:11.826 --rc genhtml_function_coverage=1 00:11:11.826 --rc genhtml_legend=1 00:11:11.826 --rc geninfo_all_blocks=1 00:11:11.826 --rc geninfo_unexecuted_blocks=1 00:11:11.826 00:11:11.826 ' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:11.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.826 --rc genhtml_branch_coverage=1 00:11:11.826 --rc genhtml_function_coverage=1 00:11:11.826 --rc genhtml_legend=1 00:11:11.826 --rc geninfo_all_blocks=1 00:11:11.826 --rc geninfo_unexecuted_blocks=1 00:11:11.826 00:11:11.826 ' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:11.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.826 --rc genhtml_branch_coverage=1 00:11:11.826 --rc genhtml_function_coverage=1 00:11:11.826 --rc genhtml_legend=1 00:11:11.826 --rc geninfo_all_blocks=1 00:11:11.826 --rc geninfo_unexecuted_blocks=1 00:11:11.826 00:11:11.826 ' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.826 
16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.826 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.827 16:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.827 16:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.827 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:19.967 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.967 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:19.968 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:19.968 Found net devices under 0000:31:00.0: cvl_0_0 00:11:19.968 16:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:19.968 Found net devices under 0000:31:00.1: cvl_0_1 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:11:19.968 00:11:19.968 --- 10.0.0.2 ping statistics --- 00:11:19.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.968 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:11:19.968 00:11:19.968 --- 10.0.0.1 ping statistics --- 00:11:19.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.968 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2099431 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2099431 00:11:19.968 
16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2099431 ']' 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.968 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.969 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.969 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.969 16:23:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 [2024-11-20 16:23:04.815952] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:11:19.969 [2024-11-20 16:23:04.816030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.969 [2024-11-20 16:23:04.900129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.969 [2024-11-20 16:23:04.941865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.969 [2024-11-20 16:23:04.941900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:19.969 [2024-11-20 16:23:04.941908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.969 [2024-11-20 16:23:04.941915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.969 [2024-11-20 16:23:04.941921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.969 [2024-11-20 16:23:04.943528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.969 [2024-11-20 16:23:04.943648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.969 [2024-11-20 16:23:04.943805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.969 [2024-11-20 16:23:04.943806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 [2024-11-20 16:23:05.671581] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 [2024-11-20 16:23:05.700140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:19.969 16:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:19.969 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:20.230 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:20.230 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:20.230 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:20.230 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.230 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.230 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.230 16:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:20.230 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.230 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:20.230 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:20.491 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:20.752 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:20.752 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:20.752 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:20.752 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:20.752 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:20.752 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:20.752 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.012 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.272 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:21.272 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:21.272 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:21.272 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:21.272 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:21.272 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:21.272 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:21.272 16:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:21.272 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:21.272 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:21.272 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:21.272 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:21.272 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:21.272 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:21.272 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:21.533 16:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.533 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:21.794 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:22.055 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:22.055 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:22.055 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:22.055 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:22.055 rmmod nvme_tcp 00:11:22.055 rmmod nvme_fabrics 00:11:22.055 rmmod nvme_keyring 00:11:22.055 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:22.055 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:22.055 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:22.055 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2099431 ']' 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2099431 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2099431 ']' 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2099431 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2099431 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2099431' 00:11:22.056 killing process with pid 2099431 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2099431 00:11:22.056 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2099431 00:11:22.056 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:22.056 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:22.056 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:22.056 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:22.056 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:22.056 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:22.056 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:22.056 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:22.317 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:22.317 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.317 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.317 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.229 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:24.229 00:11:24.230 real 0m12.820s 00:11:24.230 user 0m15.243s 00:11:24.230 sys 0m6.290s 00:11:24.230 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.230 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:24.230 
************************************ 00:11:24.230 END TEST nvmf_referrals 00:11:24.230 ************************************ 00:11:24.230 16:23:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:24.230 16:23:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:24.230 16:23:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.230 16:23:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:24.230 ************************************ 00:11:24.230 START TEST nvmf_connect_disconnect 00:11:24.230 ************************************ 00:11:24.230 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:24.491 * Looking for test storage... 
00:11:24.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:24.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.491 --rc genhtml_branch_coverage=1 00:11:24.491 --rc genhtml_function_coverage=1 00:11:24.491 --rc genhtml_legend=1 00:11:24.491 --rc geninfo_all_blocks=1 00:11:24.491 --rc geninfo_unexecuted_blocks=1 00:11:24.491 00:11:24.491 ' 00:11:24.491 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:24.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.491 --rc genhtml_branch_coverage=1 00:11:24.491 --rc genhtml_function_coverage=1 00:11:24.491 --rc genhtml_legend=1 00:11:24.492 --rc geninfo_all_blocks=1 00:11:24.492 --rc geninfo_unexecuted_blocks=1 00:11:24.492 00:11:24.492 ' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:24.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.492 --rc genhtml_branch_coverage=1 00:11:24.492 --rc genhtml_function_coverage=1 00:11:24.492 --rc genhtml_legend=1 00:11:24.492 --rc geninfo_all_blocks=1 00:11:24.492 --rc geninfo_unexecuted_blocks=1 00:11:24.492 00:11:24.492 ' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:24.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.492 --rc genhtml_branch_coverage=1 00:11:24.492 --rc genhtml_function_coverage=1 00:11:24.492 --rc genhtml_legend=1 00:11:24.492 --rc geninfo_all_blocks=1 00:11:24.492 --rc geninfo_unexecuted_blocks=1 00:11:24.492 00:11:24.492 ' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:24.492 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:32.633 16:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:32.633 16:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:32.633 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:32.633 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:32.633 16:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.633 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:32.634 Found net devices under 0000:31:00.0: cvl_0_0 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:32.634 16:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:32.634 Found net devices under 0000:31:00.1: cvl_0_1 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:32.634 16:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:32.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:32.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:11:32.634 00:11:32.634 --- 10.0.0.2 ping statistics --- 00:11:32.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.634 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:32.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:32.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:11:32.634 00:11:32.634 --- 10.0.0.1 ping statistics --- 00:11:32.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.634 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2104385 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2104385 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2104385 ']' 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.634 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.634 [2024-11-20 16:23:17.910812] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:11:32.634 [2024-11-20 16:23:17.910878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.634 [2024-11-20 16:23:17.994573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.634 [2024-11-20 16:23:18.035968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:32.634 [2024-11-20 16:23:18.036013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.634 [2024-11-20 16:23:18.036022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.634 [2024-11-20 16:23:18.036028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.634 [2024-11-20 16:23:18.036034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.634 [2024-11-20 16:23:18.037627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.634 [2024-11-20 16:23:18.037745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.634 [2024-11-20 16:23:18.037901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.634 [2024-11-20 16:23:18.037901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:32.895 16:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.895 [2024-11-20 16:23:18.769675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.895 16:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.895 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.896 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.896 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.896 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.896 [2024-11-20 16:23:18.838412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.896 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.896 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:32.896 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:32.896 16:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:37.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.189 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:51.189 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:51.189 16:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.189 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:51.189 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:51.189 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:51.189 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.189 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:51.189 rmmod nvme_tcp 00:11:51.189 rmmod nvme_fabrics 00:11:51.189 rmmod nvme_keyring 00:11:51.189 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2104385 ']' 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2104385 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2104385 ']' 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2104385 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2104385 
00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2104385' 00:11:51.450 killing process with pid 2104385 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2104385 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2104385 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.450 16:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.450 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.995 00:11:53.995 real 0m29.262s 00:11:53.995 user 1m19.037s 00:11:53.995 sys 0m7.038s 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.995 ************************************ 00:11:53.995 END TEST nvmf_connect_disconnect 00:11:53.995 ************************************ 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.995 ************************************ 00:11:53.995 START TEST nvmf_multitarget 00:11:53.995 ************************************ 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:53.995 * Looking for test storage... 
00:11:53.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:53.995 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.995 --rc genhtml_branch_coverage=1 00:11:53.995 --rc genhtml_function_coverage=1 00:11:53.995 --rc genhtml_legend=1 00:11:53.995 --rc geninfo_all_blocks=1 00:11:53.995 --rc geninfo_unexecuted_blocks=1 00:11:53.995 00:11:53.995 ' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.995 --rc genhtml_branch_coverage=1 00:11:53.995 --rc genhtml_function_coverage=1 00:11:53.995 --rc genhtml_legend=1 00:11:53.995 --rc geninfo_all_blocks=1 00:11:53.995 --rc geninfo_unexecuted_blocks=1 00:11:53.995 00:11:53.995 ' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.995 --rc genhtml_branch_coverage=1 00:11:53.995 --rc genhtml_function_coverage=1 00:11:53.995 --rc genhtml_legend=1 00:11:53.995 --rc geninfo_all_blocks=1 00:11:53.995 --rc geninfo_unexecuted_blocks=1 00:11:53.995 00:11:53.995 ' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.995 --rc genhtml_branch_coverage=1 00:11:53.995 --rc genhtml_function_coverage=1 00:11:53.995 --rc genhtml_legend=1 00:11:53.995 --rc geninfo_all_blocks=1 00:11:53.995 --rc geninfo_unexecuted_blocks=1 00:11:53.995 00:11:53.995 ' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.995 16:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.995 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.996 16:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.996 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:02.167 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:02.167 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:02.167 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:02.167 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:02.167 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:02.167 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:02.168 Found net devices under 0000:31:00.0: cvl_0_0 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.168 
16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:02.168 Found net devices under 0000:31:00.1: cvl_0_1 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:02.168 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:02.168 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:02.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:02.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:12:02.168 00:12:02.168 --- 10.0.0.2 ping statistics --- 00:12:02.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.168 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:02.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:02.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:12:02.168 00:12:02.168 --- 10.0.0.1 ping statistics --- 00:12:02.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.168 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2112533 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2112533 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2112533 ']' 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:02.168 16:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 [2024-11-20 16:23:47.328552] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:02.168 [2024-11-20 16:23:47.328600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.168 [2024-11-20 16:23:47.409045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.168 [2024-11-20 16:23:47.444886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.168 [2024-11-20 16:23:47.444922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:02.168 [2024-11-20 16:23:47.444930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.168 [2024-11-20 16:23:47.444936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.168 [2024-11-20 16:23:47.444942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.168 [2024-11-20 16:23:47.446425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.168 [2024-11-20 16:23:47.446551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.168 [2024-11-20 16:23:47.446704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.168 [2024-11-20 16:23:47.446706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.168 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.168 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:02.168 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:02.168 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.168 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:02.476 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.476 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:02.476 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:02.476 16:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:02.476 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:02.476 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:02.476 "nvmf_tgt_1" 00:12:02.476 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:02.766 "nvmf_tgt_2" 00:12:02.766 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:02.766 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:02.766 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:02.766 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:02.766 true 00:12:02.766 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:03.026 true 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:03.026 rmmod nvme_tcp 00:12:03.026 rmmod nvme_fabrics 00:12:03.026 rmmod nvme_keyring 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2112533 ']' 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2112533 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2112533 ']' 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2112533 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.026 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2112533 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2112533' 00:12:03.287 killing process with pid 2112533 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2112533 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2112533 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.287 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.833 00:12:05.833 real 0m11.691s 00:12:05.833 user 0m9.608s 00:12:05.833 sys 0m6.152s 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.833 ************************************ 00:12:05.833 END TEST nvmf_multitarget 00:12:05.833 ************************************ 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.833 ************************************ 00:12:05.833 START TEST nvmf_rpc 00:12:05.833 ************************************ 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:05.833 * Looking for test storage... 
00:12:05.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.833 16:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.833 --rc genhtml_branch_coverage=1 00:12:05.833 --rc genhtml_function_coverage=1 00:12:05.833 --rc genhtml_legend=1 00:12:05.833 --rc geninfo_all_blocks=1 00:12:05.833 --rc geninfo_unexecuted_blocks=1 
00:12:05.833 00:12:05.833 ' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.833 --rc genhtml_branch_coverage=1 00:12:05.833 --rc genhtml_function_coverage=1 00:12:05.833 --rc genhtml_legend=1 00:12:05.833 --rc geninfo_all_blocks=1 00:12:05.833 --rc geninfo_unexecuted_blocks=1 00:12:05.833 00:12:05.833 ' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.833 --rc genhtml_branch_coverage=1 00:12:05.833 --rc genhtml_function_coverage=1 00:12:05.833 --rc genhtml_legend=1 00:12:05.833 --rc geninfo_all_blocks=1 00:12:05.833 --rc geninfo_unexecuted_blocks=1 00:12:05.833 00:12:05.833 ' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.833 --rc genhtml_branch_coverage=1 00:12:05.833 --rc genhtml_function_coverage=1 00:12:05.833 --rc genhtml_legend=1 00:12:05.833 --rc geninfo_all_blocks=1 00:12:05.833 --rc geninfo_unexecuted_blocks=1 00:12:05.833 00:12:05.833 ' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.833 16:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.833 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.834 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.834 16:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.978 
16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:12:13.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:13.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:13.978 Found net devices under 0000:31:00.0: cvl_0_0 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:13.978 Found net devices under 0000:31:00.1: cvl_0_1 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.978 16:23:58 
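The discovery loop above maps each PCI address (e.g. 0000:31:00.0) to its network interface names (cvl_0_0, cvl_0_1) by globbing the device's `net/` subdirectory under sysfs. A minimal standalone sketch of that step, parameterized over the sysfs root so it can be exercised against a mock tree (the function name is mine, not the harness's):

```shell
# Sketch (assumed helper name): list the net interfaces bound to a PCI
# device by globbing the sysfs layout seen in the log above.
net_devs_for_pci() {
    local sysfs_root=$1 pci=$2 entry
    for entry in "$sysfs_root/$pci/net/"*; do
        # an unmatched glob stays literal, so filter on existence
        [ -e "$entry" ] && basename "$entry"
    done
}

# Exercise against a mock tree; real runs use /sys/bus/pci/devices.
mock=$(mktemp -d)
mkdir -p "$mock/0000:31:00.0/net/cvl_0_0"
found=$(net_devs_for_pci "$mock" "0000:31:00.0")
```

In the log the harness then strips the leading path with `"${pci_net_devs[@]##*/}"`, which is the array equivalent of the `basename` call here.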
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:13.978 
16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:13.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:13.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:12:13.978 00:12:13.978 --- 10.0.0.2 ping statistics --- 00:12:13.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.978 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:13.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:12:13.978 00:12:13.978 --- 10.0.0.1 ping statistics --- 00:12:13.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.978 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2117032 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2117032 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
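The `nvmf_tcp_init` sequence above builds a two-port loopback on one host: E810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and both directions are verified with ping. Condensed as a single function (requires root, so it is only defined here, never invoked; interface and address choices mirror this log rather than any general API):

```shell
# Condensed sketch of the namespace setup from the log above (root only).
setup_nvmf_netns() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    ip netns add "$ns"
    ip link set "$tgt" netns "$ns"           # target port into namespace
    ip addr add 10.0.0.1/24 dev "$ini"       # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"  # target side
    ip link set "$ini" up
    ip netns exec "$ns" ip link set "$tgt" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> initiator
}
```

Putting the target in its own namespace is what forces real TCP traffic between the two physical ports instead of kernel-internal loopback delivery.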
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2117032 ']' 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.978 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.978 [2024-11-20 16:23:58.919270] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:13.978 [2024-11-20 16:23:58.919336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.979 [2024-11-20 16:23:59.003418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.979 [2024-11-20 16:23:59.045624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.979 [2024-11-20 16:23:59.045657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:13.979 [2024-11-20 16:23:59.045666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.979 [2024-11-20 16:23:59.045673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.979 [2024-11-20 16:23:59.045679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.979 [2024-11-20 16:23:59.047312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.979 [2024-11-20 16:23:59.047434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.979 [2024-11-20 16:23:59.047590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.979 [2024-11-20 16:23:59.047591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.979 16:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:13.979 "tick_rate": 2400000000, 00:12:13.979 "poll_groups": [ 00:12:13.979 { 00:12:13.979 "name": "nvmf_tgt_poll_group_000", 00:12:13.979 "admin_qpairs": 0, 00:12:13.979 "io_qpairs": 0, 00:12:13.979 "current_admin_qpairs": 0, 00:12:13.979 "current_io_qpairs": 0, 00:12:13.979 "pending_bdev_io": 0, 00:12:13.979 "completed_nvme_io": 0, 00:12:13.979 "transports": [] 00:12:13.979 }, 00:12:13.979 { 00:12:13.979 "name": "nvmf_tgt_poll_group_001", 00:12:13.979 "admin_qpairs": 0, 00:12:13.979 "io_qpairs": 0, 00:12:13.979 "current_admin_qpairs": 0, 00:12:13.979 "current_io_qpairs": 0, 00:12:13.979 "pending_bdev_io": 0, 00:12:13.979 "completed_nvme_io": 0, 00:12:13.979 "transports": [] 00:12:13.979 }, 00:12:13.979 { 00:12:13.979 "name": "nvmf_tgt_poll_group_002", 00:12:13.979 "admin_qpairs": 0, 00:12:13.979 "io_qpairs": 0, 00:12:13.979 "current_admin_qpairs": 0, 00:12:13.979 "current_io_qpairs": 0, 00:12:13.979 "pending_bdev_io": 0, 00:12:13.979 "completed_nvme_io": 0, 00:12:13.979 "transports": [] 00:12:13.979 }, 00:12:13.979 { 00:12:13.979 "name": "nvmf_tgt_poll_group_003", 00:12:13.979 "admin_qpairs": 0, 00:12:13.979 "io_qpairs": 0, 00:12:13.979 "current_admin_qpairs": 0, 00:12:13.979 "current_io_qpairs": 0, 00:12:13.979 "pending_bdev_io": 0, 00:12:13.979 "completed_nvme_io": 0, 00:12:13.979 "transports": [] 00:12:13.979 } 00:12:13.979 ] 00:12:13.979 }' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:13.979 16:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.979 [2024-11-20 16:23:59.891913] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:13.979 "tick_rate": 2400000000, 00:12:13.979 "poll_groups": [ 00:12:13.979 { 00:12:13.979 "name": "nvmf_tgt_poll_group_000", 00:12:13.979 "admin_qpairs": 0, 00:12:13.979 "io_qpairs": 0, 00:12:13.979 "current_admin_qpairs": 0, 00:12:13.979 "current_io_qpairs": 0, 00:12:13.979 "pending_bdev_io": 0, 00:12:13.979 "completed_nvme_io": 0, 00:12:13.979 "transports": [ 00:12:13.979 { 00:12:13.979 "trtype": "TCP" 00:12:13.979 } 00:12:13.979 ] 00:12:13.979 }, 00:12:13.979 { 00:12:13.979 "name": "nvmf_tgt_poll_group_001", 00:12:13.979 "admin_qpairs": 0, 00:12:13.979 "io_qpairs": 0, 00:12:13.979 "current_admin_qpairs": 0, 00:12:13.979 "current_io_qpairs": 0, 00:12:13.979 "pending_bdev_io": 0, 00:12:13.979 
"completed_nvme_io": 0, 00:12:13.979 "transports": [ 00:12:13.979 { 00:12:13.979 "trtype": "TCP" 00:12:13.979 } 00:12:13.979 ] 00:12:13.979 }, 00:12:13.979 { 00:12:13.979 "name": "nvmf_tgt_poll_group_002", 00:12:13.979 "admin_qpairs": 0, 00:12:13.979 "io_qpairs": 0, 00:12:13.979 "current_admin_qpairs": 0, 00:12:13.979 "current_io_qpairs": 0, 00:12:13.979 "pending_bdev_io": 0, 00:12:13.979 "completed_nvme_io": 0, 00:12:13.979 "transports": [ 00:12:13.979 { 00:12:13.979 "trtype": "TCP" 00:12:13.979 } 00:12:13.979 ] 00:12:13.979 }, 00:12:13.979 { 00:12:13.979 "name": "nvmf_tgt_poll_group_003", 00:12:13.979 "admin_qpairs": 0, 00:12:13.979 "io_qpairs": 0, 00:12:13.979 "current_admin_qpairs": 0, 00:12:13.979 "current_io_qpairs": 0, 00:12:13.979 "pending_bdev_io": 0, 00:12:13.979 "completed_nvme_io": 0, 00:12:13.979 "transports": [ 00:12:13.979 { 00:12:13.979 "trtype": "TCP" 00:12:13.979 } 00:12:13.979 ] 00:12:13.979 } 00:12:13.979 ] 00:12:13.979 }' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:13.979 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.240 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:14.240 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:14.240 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:14.240 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:14.240 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:14.240 
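The `jcount`/`jsum` helpers visible above (rpc.sh@14-20) pipe `nvmf_get_stats` JSON through `jq` plus `wc -l` or `awk '{s+=$1}END{print s}'` to count poll groups and sum per-group counters. A jq-free sketch of the same summation using only grep and awk (helper name is mine; the harness's real `jsum` uses jq as shown in the log):

```shell
# Sum one numeric field across nvmf_get_stats-style JSON without jq
# (sketch; assumed helper name, not part of the SPDK test harness).
jsum_field() {
    local field=$1
    grep -o "\"$field\": *[0-9]*" | awk -F: '{s += $2} END {print s + 0}'
}

# Two idle poll groups, as in the stats dump above: the sum is 0.
stats='{"poll_groups":[{"admin_qpairs":0,"io_qpairs":0},{"admin_qpairs":0,"io_qpairs":0}]}'
total=$(printf '%s' "$stats" | jsum_field admin_qpairs)
```

The test then asserts `(( 0 == 0 ))` on each summed counter, i.e. a freshly started target with no connected hosts must report all qpair counts as zero.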
16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.240 Malloc1 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:14.240 16:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.240 [2024-11-20 16:24:00.094124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:14.240 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:12:14.241 [2024-11-20 16:24:00.131105] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:14.241 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:14.241 could not add new controller: failed to write to nvme-fabrics device 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.241 16:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.153 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.153 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:16.154 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.154 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:16.154 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:18.067 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:18.067 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:18.067 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
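`waitforserial` above polls `lsblk -l -o NAME,SERIAL` for up to 16 iterations, sleeping between attempts, until the expected number of block devices carrying serial SPDKISFASTANDAWESOME appears. The same bounded-retry pattern backs `waitforlisten` earlier in the log; a generic sketch (helper name assumed):

```shell
# Generic bounded-retry loop in the spirit of waitforserial/waitforlisten
# (assumed name): rerun a command until it succeeds or attempts run out.
retry() {
    local tries=$1 i=0
    shift
    while [ "$i" -lt "$tries" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

The harness's version differs in that it compares a counted device total against an expected counter (`nvme_devices == nvme_device_counter`) rather than a bare exit status, but the polling skeleton is the same.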
00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:18.068 16:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.068 [2024-11-20 16:24:03.897322] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:12:18.068 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.068 could not add new controller: failed to write to nvme-fabrics device 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:18.068 
16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.068 16:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.984 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.984 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.984 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.984 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:19.984 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.900 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:21.900 16:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.901 [2024-11-20 16:24:07.625388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.901 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.283 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.283 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:23.283 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.283 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:23.283 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:25.195 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:25.195 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:25.195 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.455 
16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.455 [2024-11-20 16:24:11.331285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.455 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.456 16:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.366 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.366 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:27.366 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.366 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:27.366 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.278 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.278 16:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.278 [2024-11-20 16:24:15.060369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.278 16:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:30.661 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:30.661 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:30.661 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:30.661 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:30.661 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 [2024-11-20 16:24:18.797040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.207 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.596 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.596 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.596 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:34.596 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.596 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.510 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.771 [2024-11-20 16:24:22.512386] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.771 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.154 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.154 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:38.154 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.154 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:38.154 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:40.065 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:40.065 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:40.065 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.325 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.326 [2024-11-20 16:24:26.222610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.326 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 [2024-11-20 16:24:26.290789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 
16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:40.588 [2024-11-20 16:24:26.363020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.588 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.588 [2024-11-20 16:24:26.435222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.589 [2024-11-20 16:24:26.503427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.589 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.850 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:40.850 "tick_rate": 2400000000, 00:12:40.850 "poll_groups": [ 00:12:40.850 { 00:12:40.850 "name": "nvmf_tgt_poll_group_000", 00:12:40.850 "admin_qpairs": 0, 00:12:40.850 "io_qpairs": 224, 00:12:40.850 "current_admin_qpairs": 0, 00:12:40.850 "current_io_qpairs": 0, 00:12:40.850 "pending_bdev_io": 0, 00:12:40.850 "completed_nvme_io": 489, 00:12:40.850 "transports": [ 00:12:40.850 { 00:12:40.850 "trtype": "TCP" 00:12:40.850 } 00:12:40.850 ] 00:12:40.850 }, 00:12:40.850 { 00:12:40.850 "name": "nvmf_tgt_poll_group_001", 00:12:40.850 "admin_qpairs": 1, 00:12:40.850 "io_qpairs": 223, 00:12:40.850 "current_admin_qpairs": 0, 00:12:40.850 "current_io_qpairs": 0, 00:12:40.850 "pending_bdev_io": 0, 00:12:40.850 "completed_nvme_io": 227, 00:12:40.850 "transports": [ 00:12:40.850 { 00:12:40.850 "trtype": "TCP" 00:12:40.850 } 00:12:40.850 ] 00:12:40.850 }, 00:12:40.850 { 00:12:40.850 "name": "nvmf_tgt_poll_group_002", 00:12:40.850 "admin_qpairs": 6, 00:12:40.850 "io_qpairs": 218, 00:12:40.850 "current_admin_qpairs": 0, 00:12:40.850 "current_io_qpairs": 0, 00:12:40.850 "pending_bdev_io": 0, 
00:12:40.850 "completed_nvme_io": 234, 00:12:40.850 "transports": [ 00:12:40.850 { 00:12:40.850 "trtype": "TCP" 00:12:40.850 } 00:12:40.850 ] 00:12:40.850 }, 00:12:40.850 { 00:12:40.850 "name": "nvmf_tgt_poll_group_003", 00:12:40.850 "admin_qpairs": 0, 00:12:40.850 "io_qpairs": 224, 00:12:40.850 "current_admin_qpairs": 0, 00:12:40.850 "current_io_qpairs": 0, 00:12:40.850 "pending_bdev_io": 0, 00:12:40.850 "completed_nvme_io": 289, 00:12:40.850 "transports": [ 00:12:40.851 { 00:12:40.851 "trtype": "TCP" 00:12:40.851 } 00:12:40.851 ] 00:12:40.851 } 00:12:40.851 ] 00:12:40.851 }' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:40.851 rmmod nvme_tcp 00:12:40.851 rmmod nvme_fabrics 00:12:40.851 rmmod nvme_keyring 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2117032 ']' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2117032 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2117032 ']' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2117032 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.851 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2117032 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2117032' 00:12:41.113 killing process with pid 2117032 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2117032 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2117032 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.113 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.659 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:43.659 00:12:43.659 real 0m37.736s 00:12:43.659 user 1m53.531s 00:12:43.659 sys 0m7.579s 00:12:43.659 16:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.659 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.659 ************************************ 00:12:43.659 END TEST nvmf_rpc 00:12:43.660 ************************************ 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:43.660 ************************************ 00:12:43.660 START TEST nvmf_invalid 00:12:43.660 ************************************ 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:43.660 * Looking for test storage... 
00:12:43.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:43.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.660 --rc genhtml_branch_coverage=1 00:12:43.660 --rc 
genhtml_function_coverage=1 00:12:43.660 --rc genhtml_legend=1 00:12:43.660 --rc geninfo_all_blocks=1 00:12:43.660 --rc geninfo_unexecuted_blocks=1 00:12:43.660 00:12:43.660 ' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:43.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.660 --rc genhtml_branch_coverage=1 00:12:43.660 --rc genhtml_function_coverage=1 00:12:43.660 --rc genhtml_legend=1 00:12:43.660 --rc geninfo_all_blocks=1 00:12:43.660 --rc geninfo_unexecuted_blocks=1 00:12:43.660 00:12:43.660 ' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:43.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.660 --rc genhtml_branch_coverage=1 00:12:43.660 --rc genhtml_function_coverage=1 00:12:43.660 --rc genhtml_legend=1 00:12:43.660 --rc geninfo_all_blocks=1 00:12:43.660 --rc geninfo_unexecuted_blocks=1 00:12:43.660 00:12:43.660 ' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:43.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.660 --rc genhtml_branch_coverage=1 00:12:43.660 --rc genhtml_function_coverage=1 00:12:43.660 --rc genhtml_legend=1 00:12:43.660 --rc geninfo_all_blocks=1 00:12:43.660 --rc geninfo_unexecuted_blocks=1 00:12:43.660 00:12:43.660 ' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.660 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.661 16:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:43.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:43.661 16:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:43.661 16:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.804 16:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.804 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.805 16:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:51.805 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:51.805 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:51.805 Found net devices under 0000:31:00.0: cvl_0_0 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:51.805 Found net devices under 0000:31:00.1: cvl_0_1 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.805 16:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.805 16:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:51.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:12:51.805 00:12:51.805 --- 10.0.0.2 ping statistics --- 00:12:51.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.805 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:12:51.805 00:12:51.805 --- 10.0.0.1 ping statistics --- 00:12:51.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.805 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:51.805 16:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2127398 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2127398 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2127398 ']' 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:51.805 16:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:51.805 [2024-11-20 16:24:36.672652] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:12:51.805 [2024-11-20 16:24:36.672721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.805 [2024-11-20 16:24:36.757104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.805 [2024-11-20 16:24:36.799414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.805 [2024-11-20 16:24:36.799449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.805 [2024-11-20 16:24:36.799457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.805 [2024-11-20 16:24:36.799464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.805 [2024-11-20 16:24:36.799470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:51.805 [2024-11-20 16:24:36.801329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.805 [2024-11-20 16:24:36.801450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.805 [2024-11-20 16:24:36.801607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.805 [2024-11-20 16:24:36.801608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16691 00:12:51.805 [2024-11-20 16:24:37.666157] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:51.805 { 00:12:51.805 "nqn": "nqn.2016-06.io.spdk:cnode16691", 00:12:51.805 "tgt_name": "foobar", 00:12:51.805 "method": "nvmf_create_subsystem", 00:12:51.805 "req_id": 1 00:12:51.805 } 00:12:51.805 Got JSON-RPC error 
response 00:12:51.805 response: 00:12:51.805 { 00:12:51.805 "code": -32603, 00:12:51.805 "message": "Unable to find target foobar" 00:12:51.805 }' 00:12:51.805 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:51.805 { 00:12:51.805 "nqn": "nqn.2016-06.io.spdk:cnode16691", 00:12:51.805 "tgt_name": "foobar", 00:12:51.806 "method": "nvmf_create_subsystem", 00:12:51.806 "req_id": 1 00:12:51.806 } 00:12:51.806 Got JSON-RPC error response 00:12:51.806 response: 00:12:51.806 { 00:12:51.806 "code": -32603, 00:12:51.806 "message": "Unable to find target foobar" 00:12:51.806 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:51.806 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:51.806 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8360 00:12:52.072 [2024-11-20 16:24:37.854823] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8360: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:52.072 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:52.072 { 00:12:52.072 "nqn": "nqn.2016-06.io.spdk:cnode8360", 00:12:52.072 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:52.072 "method": "nvmf_create_subsystem", 00:12:52.072 "req_id": 1 00:12:52.072 } 00:12:52.072 Got JSON-RPC error response 00:12:52.072 response: 00:12:52.072 { 00:12:52.072 "code": -32602, 00:12:52.072 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:52.072 }' 00:12:52.072 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:52.072 { 00:12:52.072 "nqn": "nqn.2016-06.io.spdk:cnode8360", 00:12:52.072 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:52.072 "method": "nvmf_create_subsystem", 00:12:52.072 
"req_id": 1 00:12:52.072 } 00:12:52.072 Got JSON-RPC error response 00:12:52.072 response: 00:12:52.072 { 00:12:52.072 "code": -32602, 00:12:52.072 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:52.072 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:52.072 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:52.072 16:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6273 00:12:52.334 [2024-11-20 16:24:38.043369] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6273: invalid model number 'SPDK_Controller' 00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:52.334 { 00:12:52.334 "nqn": "nqn.2016-06.io.spdk:cnode6273", 00:12:52.334 "model_number": "SPDK_Controller\u001f", 00:12:52.334 "method": "nvmf_create_subsystem", 00:12:52.334 "req_id": 1 00:12:52.334 } 00:12:52.334 Got JSON-RPC error response 00:12:52.334 response: 00:12:52.334 { 00:12:52.334 "code": -32602, 00:12:52.334 "message": "Invalid MN SPDK_Controller\u001f" 00:12:52.334 }' 00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:52.334 { 00:12:52.334 "nqn": "nqn.2016-06.io.spdk:cnode6273", 00:12:52.334 "model_number": "SPDK_Controller\u001f", 00:12:52.334 "method": "nvmf_create_subsystem", 00:12:52.334 "req_id": 1 00:12:52.334 } 00:12:52.334 Got JSON-RPC error response 00:12:52.334 response: 00:12:52.334 { 00:12:52.334 "code": -32602, 00:12:52.334 "message": "Invalid MN SPDK_Controller\u001f" 00:12:52.334 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:12:52.334 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # [21 near-identical loop iterations elided: each prints the next code with printf %x, converts it with echo -e, and appends the character via string+=]
00:12:52.335 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]]
00:12:52.335 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 't$_+TOO]^EWxW/=[EDH)V'
00:12:52.335 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 't$_+TOO]^EWxW/=[EDH)V' nqn.2016-06.io.spdk:cnode12795
00:12:52.596 [2024-11-20 16:24:38.396532] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12795: invalid serial number 't$_+TOO]^EWxW/=[EDH)V'
00:12:52.596 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode12795", "serial_number": "t$_+TOO]^EWxW/=[EDH)V", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN t$_+TOO]^EWxW/=[EDH)V" }'
00:12:52.596 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: ... "Invalid SN t$_+TOO]^EWxW/=[EDH)V" } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:52.596 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:12:52.596 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:12:52.597 16:24:38 
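The trace above is gen_random_s from target/invalid.sh expanding one iteration at a time: printf %x turns a decimal ASCII code into hex, echo -e turns the \xNN escape into the literal character, and string+= accumulates it until ll reaches length. A minimal standalone sketch of the same technique, assuming plain bash; drawing codes with $RANDOM is an assumption here, since the trace only shows codes after selection:

```shell
#!/usr/bin/env bash
# Sketch of the gen_random_s technique seen in the trace above.
# Assumption: codes are picked with $RANDOM; the log only shows the
# already-chosen codes. Range here is 32-126 (printable ASCII); the
# test script's chars array also includes 127 (DEL).
gen_random_s() {
    local length=$1 ll string=''
    local chars=($(seq 32 126))
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[RANDOM % ${#chars[@]}]}
        # printf %x -> hex code; echo -e '\xNN' -> the literal character
        string+=$(echo -e "\\x$(printf %x "$code")")
    done
    # printf instead of echo so a leading '-' in the string is safe
    printf '%s\n' "$string"
}

gen_random_s 21    # e.g. a 21-character serial-number candidate
```

The generated strings deliberately include shell-hostile characters ($, \, [, space), which is why the test quotes them when passing -s/-d to rpc.py.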
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # [second gen_random_s call: chars array (identical to the one above) and 39 per-character loop iterations elided]
00:12:52.861 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]]
00:12:52.861 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '1zFw@efJpW\hjvuPo {JDTbGTWxULq\7S0{[CW,'
00:12:52.861 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '1zFw@efJpW\hjvuPo {JDTbGTWxULq\7S0{[CW,' nqn.2016-06.io.spdk:cnode5596
00:12:53.125 [2024-11-20 16:24:38.906203] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5596: invalid model number '1zFw@efJpW\hjvuPo {JDTbGTWxULq\7S0{[CW,'
00:12:53.125 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode5596", "model_number": "1zFw@efJpW\\hjvuPo {JDTbGTWxULq\\7S\u007f0{[CW,", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN 1zFw@efJpW\\hjvuPo {JDTbGTWxULq\\7S\u007f0{[CW," }'
00:12:53.125 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:53.125 { 00:12:53.125 "nqn": 
"nqn.2016-06.io.spdk:cnode5596", 00:12:53.125 "model_number": "1zFw@efJpW\\hjvuPo {JDTbGTWxULq\\7S\u007f0{[CW,", 00:12:53.125 "method": "nvmf_create_subsystem", 00:12:53.125 "req_id": 1 00:12:53.125 } 00:12:53.125 Got JSON-RPC error response 00:12:53.125 response: 00:12:53.125 { 00:12:53.125 "code": -32602, 00:12:53.125 "message": "Invalid MN 1zFw@efJpW\\hjvuPo {JDTbGTWxULq\\7S\u007f0{[CW," 00:12:53.125 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:53.125 16:24:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:53.387 [2024-11-20 16:24:39.090875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.387 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:53.387 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:53.387 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:53.387 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:53.387 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:53.387 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:53.649 [2024-11-20 16:24:39.468082] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:53.649 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:53.649 { 00:12:53.649 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:53.649 "listen_address": { 00:12:53.649 "trtype": "tcp", 00:12:53.649 "traddr": "", 00:12:53.649 
"trsvcid": "4421" 00:12:53.649 }, 00:12:53.649 "method": "nvmf_subsystem_remove_listener", 00:12:53.649 "req_id": 1 00:12:53.649 } 00:12:53.649 Got JSON-RPC error response 00:12:53.649 response: 00:12:53.649 { 00:12:53.649 "code": -32602, 00:12:53.649 "message": "Invalid parameters" 00:12:53.649 }' 00:12:53.649 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:53.649 { 00:12:53.649 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:53.649 "listen_address": { 00:12:53.649 "trtype": "tcp", 00:12:53.649 "traddr": "", 00:12:53.649 "trsvcid": "4421" 00:12:53.649 }, 00:12:53.649 "method": "nvmf_subsystem_remove_listener", 00:12:53.649 "req_id": 1 00:12:53.649 } 00:12:53.649 Got JSON-RPC error response 00:12:53.649 response: 00:12:53.649 { 00:12:53.649 "code": -32602, 00:12:53.649 "message": "Invalid parameters" 00:12:53.649 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:53.649 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20005 -i 0 00:12:53.911 [2024-11-20 16:24:39.652614] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20005: invalid cntlid range [0-65519] 00:12:53.911 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:53.911 { 00:12:53.911 "nqn": "nqn.2016-06.io.spdk:cnode20005", 00:12:53.911 "min_cntlid": 0, 00:12:53.911 "method": "nvmf_create_subsystem", 00:12:53.911 "req_id": 1 00:12:53.911 } 00:12:53.911 Got JSON-RPC error response 00:12:53.911 response: 00:12:53.911 { 00:12:53.911 "code": -32602, 00:12:53.911 "message": "Invalid cntlid range [0-65519]" 00:12:53.911 }' 00:12:53.911 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:53.911 { 00:12:53.911 "nqn": "nqn.2016-06.io.spdk:cnode20005", 00:12:53.911 "min_cntlid": 0, 00:12:53.911 
"method": "nvmf_create_subsystem", 00:12:53.911 "req_id": 1 00:12:53.911 } 00:12:53.911 Got JSON-RPC error response 00:12:53.911 response: 00:12:53.911 { 00:12:53.911 "code": -32602, 00:12:53.911 "message": "Invalid cntlid range [0-65519]" 00:12:53.911 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:53.911 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28200 -i 65520 00:12:53.911 [2024-11-20 16:24:39.833218] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28200: invalid cntlid range [65520-65519] 00:12:53.911 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:53.911 { 00:12:53.911 "nqn": "nqn.2016-06.io.spdk:cnode28200", 00:12:53.911 "min_cntlid": 65520, 00:12:53.911 "method": "nvmf_create_subsystem", 00:12:53.911 "req_id": 1 00:12:53.911 } 00:12:53.911 Got JSON-RPC error response 00:12:53.911 response: 00:12:53.911 { 00:12:53.911 "code": -32602, 00:12:53.911 "message": "Invalid cntlid range [65520-65519]" 00:12:53.911 }' 00:12:53.911 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:53.911 { 00:12:53.911 "nqn": "nqn.2016-06.io.spdk:cnode28200", 00:12:53.911 "min_cntlid": 65520, 00:12:53.911 "method": "nvmf_create_subsystem", 00:12:53.911 "req_id": 1 00:12:53.911 } 00:12:53.911 Got JSON-RPC error response 00:12:53.911 response: 00:12:53.911 { 00:12:53.911 "code": -32602, 00:12:53.911 "message": "Invalid cntlid range [65520-65519]" 00:12:53.911 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:54.173 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1844 -I 0 00:12:54.173 [2024-11-20 16:24:40.017948] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: 
*ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1844: invalid cntlid range [1-0] 00:12:54.173 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:54.173 { 00:12:54.173 "nqn": "nqn.2016-06.io.spdk:cnode1844", 00:12:54.173 "max_cntlid": 0, 00:12:54.173 "method": "nvmf_create_subsystem", 00:12:54.173 "req_id": 1 00:12:54.173 } 00:12:54.173 Got JSON-RPC error response 00:12:54.173 response: 00:12:54.173 { 00:12:54.173 "code": -32602, 00:12:54.173 "message": "Invalid cntlid range [1-0]" 00:12:54.173 }' 00:12:54.173 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:54.173 { 00:12:54.173 "nqn": "nqn.2016-06.io.spdk:cnode1844", 00:12:54.173 "max_cntlid": 0, 00:12:54.173 "method": "nvmf_create_subsystem", 00:12:54.173 "req_id": 1 00:12:54.173 } 00:12:54.173 Got JSON-RPC error response 00:12:54.173 response: 00:12:54.173 { 00:12:54.173 "code": -32602, 00:12:54.173 "message": "Invalid cntlid range [1-0]" 00:12:54.173 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:54.173 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12643 -I 65520 00:12:54.434 [2024-11-20 16:24:40.270738] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12643: invalid cntlid range [1-65520] 00:12:54.434 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:54.434 { 00:12:54.434 "nqn": "nqn.2016-06.io.spdk:cnode12643", 00:12:54.434 "max_cntlid": 65520, 00:12:54.434 "method": "nvmf_create_subsystem", 00:12:54.434 "req_id": 1 00:12:54.434 } 00:12:54.434 Got JSON-RPC error response 00:12:54.434 response: 00:12:54.434 { 00:12:54.434 "code": -32602, 00:12:54.434 "message": "Invalid cntlid range [1-65520]" 00:12:54.434 }' 00:12:54.434 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:12:54.434 { 00:12:54.434 "nqn": "nqn.2016-06.io.spdk:cnode12643", 00:12:54.434 "max_cntlid": 65520, 00:12:54.434 "method": "nvmf_create_subsystem", 00:12:54.434 "req_id": 1 00:12:54.434 } 00:12:54.434 Got JSON-RPC error response 00:12:54.434 response: 00:12:54.434 { 00:12:54.434 "code": -32602, 00:12:54.434 "message": "Invalid cntlid range [1-65520]" 00:12:54.434 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:54.434 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10044 -i 6 -I 5 00:12:54.694 [2024-11-20 16:24:40.455297] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10044: invalid cntlid range [6-5] 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:54.694 { 00:12:54.694 "nqn": "nqn.2016-06.io.spdk:cnode10044", 00:12:54.694 "min_cntlid": 6, 00:12:54.694 "max_cntlid": 5, 00:12:54.694 "method": "nvmf_create_subsystem", 00:12:54.694 "req_id": 1 00:12:54.694 } 00:12:54.694 Got JSON-RPC error response 00:12:54.694 response: 00:12:54.694 { 00:12:54.694 "code": -32602, 00:12:54.694 "message": "Invalid cntlid range [6-5]" 00:12:54.694 }' 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:54.694 { 00:12:54.694 "nqn": "nqn.2016-06.io.spdk:cnode10044", 00:12:54.694 "min_cntlid": 6, 00:12:54.694 "max_cntlid": 5, 00:12:54.694 "method": "nvmf_create_subsystem", 00:12:54.694 "req_id": 1 00:12:54.694 } 00:12:54.694 Got JSON-RPC error response 00:12:54.694 response: 00:12:54.694 { 00:12:54.694 "code": -32602, 00:12:54.694 "message": "Invalid cntlid range [6-5]" 00:12:54.694 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:54.694 { 00:12:54.694 "name": "foobar", 00:12:54.694 "method": "nvmf_delete_target", 00:12:54.694 "req_id": 1 00:12:54.694 } 00:12:54.694 Got JSON-RPC error response 00:12:54.694 response: 00:12:54.694 { 00:12:54.694 "code": -32602, 00:12:54.694 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:54.694 }' 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:54.694 { 00:12:54.694 "name": "foobar", 00:12:54.694 "method": "nvmf_delete_target", 00:12:54.694 "req_id": 1 00:12:54.694 } 00:12:54.694 Got JSON-RPC error response 00:12:54.694 response: 00:12:54.694 { 00:12:54.694 "code": -32602, 00:12:54.694 "message": "The specified target doesn't exist, cannot delete it." 00:12:54.694 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:54.694 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:54.694 rmmod nvme_tcp 00:12:54.694 
rmmod nvme_fabrics 00:12:54.694 rmmod nvme_keyring 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2127398 ']' 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2127398 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2127398 ']' 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2127398 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2127398 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2127398' 00:12:54.956 killing process with pid 2127398 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2127398 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2127398 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:54.956 16:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.956 16:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.501 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:57.501 00:12:57.501 real 0m13.824s 00:12:57.501 user 0m20.781s 00:12:57.501 sys 0m6.394s 00:12:57.501 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.501 16:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.501 ************************************ 00:12:57.501 END TEST nvmf_invalid 00:12:57.501 ************************************ 00:12:57.501 16:24:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:12:57.501 16:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.501 16:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.501 16:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:57.501 ************************************ 00:12:57.501 START TEST nvmf_connect_stress 00:12:57.501 ************************************ 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:57.501 * Looking for test storage... 00:12:57.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:57.501 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:57.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.502 --rc genhtml_branch_coverage=1 00:12:57.502 --rc genhtml_function_coverage=1 00:12:57.502 --rc genhtml_legend=1 00:12:57.502 --rc 
geninfo_all_blocks=1 00:12:57.502 --rc geninfo_unexecuted_blocks=1 00:12:57.502 00:12:57.502 ' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:57.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.502 --rc genhtml_branch_coverage=1 00:12:57.502 --rc genhtml_function_coverage=1 00:12:57.502 --rc genhtml_legend=1 00:12:57.502 --rc geninfo_all_blocks=1 00:12:57.502 --rc geninfo_unexecuted_blocks=1 00:12:57.502 00:12:57.502 ' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:57.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.502 --rc genhtml_branch_coverage=1 00:12:57.502 --rc genhtml_function_coverage=1 00:12:57.502 --rc genhtml_legend=1 00:12:57.502 --rc geninfo_all_blocks=1 00:12:57.502 --rc geninfo_unexecuted_blocks=1 00:12:57.502 00:12:57.502 ' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:57.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:57.502 --rc genhtml_branch_coverage=1 00:12:57.502 --rc genhtml_function_coverage=1 00:12:57.502 --rc genhtml_legend=1 00:12:57.502 --rc geninfo_all_blocks=1 00:12:57.502 --rc geninfo_unexecuted_blocks=1 00:12:57.502 00:12:57.502 ' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:57.502 
16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:57.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:57.502 16:24:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:05.682 16:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:05.682 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:05.682 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.682 16:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.682 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:05.683 Found net devices under 0000:31:00.0: cvl_0_0 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:05.683 Found net devices under 0000:31:00.1: cvl_0_1 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:05.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:13:05.683 00:13:05.683 --- 10.0.0.2 ping statistics --- 00:13:05.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.683 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:05.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:13:05.683 00:13:05.683 --- 10.0.0.1 ping statistics --- 00:13:05.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.683 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2132609 00:13:05.683 16:24:50 
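The `nvmf_tcp_init` trace above moves one NIC port into a private namespace, assigns the 10.0.0.x addresses, opens TCP port 4420, and verifies reachability in both directions with ping. A dry-run sketch of that sequence, using the interface and namespace names from this run; the real commands need root, so this only echoes each step in order:

```shell
# Dry-run sketch of the nvmf_tcp_init sequence traced above. Interface
# and namespace names are the ones from this run; the real commands need
# root, so each step is echoed in order instead of executed.
NS=cvl_0_0_ns_spdk
tcp_init_dryrun() {
  echo "ip -4 addr flush cvl_0_0"
  echo "ip -4 addr flush cvl_0_1"
  echo "ip netns add $NS"
  echo "ip link set cvl_0_0 netns $NS"                  # target port moves into the namespace
  echo "ip addr add 10.0.0.1/24 dev cvl_0_1"            # initiator side stays in the host
  echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"
  echo "ip link set cvl_0_1 up"
  echo "ip netns exec $NS ip link set cvl_0_0 up"
  echo "ip netns exec $NS ip link set lo up"
  echo "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
  echo "ping -c 1 10.0.0.2"                             # initiator -> target
  echo "ip netns exec $NS ping -c 1 10.0.0.1"           # target -> initiator
}
init_out=$(tcp_init_dryrun)
echo "$init_out"
```

Isolating the target port in its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the host) over a real physical link.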
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2132609 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2132609 ']' 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.683 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.683 [2024-11-20 16:24:50.746168] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:05.683 [2024-11-20 16:24:50.746221] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.683 [2024-11-20 16:24:50.843679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.683 [2024-11-20 16:24:50.885881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:05.683 [2024-11-20 16:24:50.885927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.683 [2024-11-20 16:24:50.885935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.683 [2024-11-20 16:24:50.885942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.683 [2024-11-20 16:24:50.885948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.683 [2024-11-20 16:24:50.887637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.683 [2024-11-20 16:24:50.887797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.683 [2024-11-20 16:24:50.887799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.683 [2024-11-20 16:24:51.595207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:05.683 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.026 [2024-11-20 16:24:51.619527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.026 NULL1 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
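The four `rpc_cmd` calls traced above (connect_stress.sh lines 15-18) are the standard target bring-up: create the TCP transport, create the subsystem, add a listener, and back it with a null bdev. A sketch of the equivalent `rpc.py` invocations; the script path is an assumption, and each call needs a running `nvmf_tgt` on /var/tmp/spdk.sock, so the commands are echoed rather than executed:

```shell
# Sketch of the target bring-up recorded above. The rpc.py path is an
# assumption; a live nvmf_tgt must own /var/tmp/spdk.sock for these to
# succeed, so the command lines are only echoed here.
RPC="scripts/rpc.py"
target_bringup_dryrun() {
  echo "$RPC nvmf_create_transport -t tcp -o -u 8192"
  echo "$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"
  echo "$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
  echo "$RPC bdev_null_create NULL1 1000 512"
}
bringup_out=$(target_bringup_dryrun)
echo "$bringup_out"
```

The null bdev (1000 MiB, 512-byte blocks) discards writes and returns zeroes on reads, which keeps the stress test focused on connect/disconnect churn rather than storage throughput.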
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2132934 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.026 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.027 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.332 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.332 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:06.332 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.332 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.332 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.593 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.593 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:06.593 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.593 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.593 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:06.879 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.879 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:06.879 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:06.879 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.879 16:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.140 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.140 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:07.140 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.140 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.140 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.713 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.713 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:07.713 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.713 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.713 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:07.973 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.973 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:07.973 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.973 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.973 16:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.234 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.234 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:08.234 16:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.234 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.234 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.495 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.495 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:08.495 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.495 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.495 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:08.756 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.756 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:08.756 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.756 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.756 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.328 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.328 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:09.328 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.328 16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.328 
16:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:09.588 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.588 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:09.588 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.588 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.588 16:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:15.739 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.739 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:15.739 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.739 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.739 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.002 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
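The repeated `kill -0 2132934` / `rpc_cmd` pairs traced above are connect_stress.sh polling the stress process: `kill -0` probes whether a PID still exists without delivering any signal, and the loop keeps issuing RPCs against the target until that probe fails. A minimal sketch of the pattern, assuming illustrative names and messages (the real script runs `rpc_cmd` where the echo is):

```shell
# Illustrative sketch of the connect_stress.sh polling loop (lines 34-35 in
# the trace). poll_while_alive is a made-up name, not from the log.
poll_while_alive() {
    local pid=$1
    while kill -0 "$pid" 2>/dev/null; do  # kill -0: existence check, no signal sent
        echo "process $pid still running; issuing RPC"
        sleep 0.25                        # illustrative pacing between RPCs
    done
    echo "process $pid has exited"
}
```

Once the PID is gone, `kill -0` fails and the loop falls through, which matches the `kill: (2132934) - No such process` line that ends the polling in this log.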
00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2132934 00:13:16.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2132934) - No such process 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2132934 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:16.002 rmmod nvme_tcp 00:13:16.002 rmmod nvme_fabrics 00:13:16.002 rmmod nvme_keyring 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2132609 ']' 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2132609 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2132609 ']' 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2132609 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.002 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2132609 00:13:16.263 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:16.264 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:16.264 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2132609' 00:13:16.264 killing process with pid 2132609 00:13:16.264 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2132609 00:13:16.264 16:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2132609 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.264 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.812 00:13:18.812 real 0m21.153s 00:13:18.812 user 0m42.285s 00:13:18.812 sys 0m9.116s 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.812 ************************************ 00:13:18.812 END TEST nvmf_connect_stress 00:13:18.812 ************************************ 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.812 ************************************ 00:13:18.812 START TEST nvmf_fused_ordering 00:13:18.812 ************************************ 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:18.812 * Looking for test storage... 00:13:18.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.812 16:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.812 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:18.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.813 --rc genhtml_branch_coverage=1 00:13:18.813 --rc genhtml_function_coverage=1 00:13:18.813 --rc genhtml_legend=1 00:13:18.813 --rc geninfo_all_blocks=1 00:13:18.813 --rc geninfo_unexecuted_blocks=1 00:13:18.813 00:13:18.813 ' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:18.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.813 --rc genhtml_branch_coverage=1 00:13:18.813 --rc genhtml_function_coverage=1 00:13:18.813 --rc genhtml_legend=1 00:13:18.813 --rc geninfo_all_blocks=1 00:13:18.813 --rc geninfo_unexecuted_blocks=1 00:13:18.813 00:13:18.813 ' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:18.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.813 --rc genhtml_branch_coverage=1 00:13:18.813 --rc genhtml_function_coverage=1 00:13:18.813 --rc genhtml_legend=1 00:13:18.813 --rc geninfo_all_blocks=1 00:13:18.813 --rc geninfo_unexecuted_blocks=1 00:13:18.813 00:13:18.813 ' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:18.813 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:18.813 --rc genhtml_branch_coverage=1 00:13:18.813 --rc genhtml_function_coverage=1 00:13:18.813 --rc genhtml_legend=1 00:13:18.813 --rc geninfo_all_blocks=1 00:13:18.813 --rc geninfo_unexecuted_blocks=1 00:13:18.813 00:13:18.813 ' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.813 16:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
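Before the fused_ordering test body runs, the trace walks scripts/common.sh's `cmp_versions` to decide whether the installed lcov predates version 2 (`lt 1.15 2`): both version strings are split on dots into arrays (`IFS=.-:` with `read -ra ver1`/`ver2`) and compared component by component, with the shorter version padded with zeros. A hedged simplification of that comparison — the helper below is my own reduction, not the real `cmp_versions`, which also handles `>`, `>=`, and mixed separators:

```shell
# Simplified dotted-version less-than test in the spirit of the cmp_versions
# trace above ("lt 1.15 2"): split on '.', compare numerically per component,
# treat missing components as 0. version_lt is an illustrative name.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1  # equal versions are not less-than
}
```

Under this scheme `1.15 < 2` holds because the first components already differ, which is why the trace sets the pre-2.0 `--rc lcov_branch_coverage=1` style of LCOV_OPTS.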
00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.813 16:25:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.963 16:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:26.963 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.963 16:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:26.963 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:26.963 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.964 16:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:26.964 Found net devices under 0000:31:00.0: cvl_0_0 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:26.964 Found net devices under 0000:31:00.1: cvl_0_1 
00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:26.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:26.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:13:26.964 00:13:26.964 --- 10.0.0.2 ping statistics --- 00:13:26.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.964 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:26.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:13:26.964 00:13:26.964 --- 10.0.0.1 ping statistics --- 00:13:26.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.964 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:26.964 16:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2139035 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2139035 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2139035 ']' 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.964 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.964 [2024-11-20 16:25:11.818553] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:13:26.964 [2024-11-20 16:25:11.818602] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.964 [2024-11-20 16:25:11.911980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.965 [2024-11-20 16:25:11.945973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.965 [2024-11-20 16:25:11.946010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.965 [2024-11-20 16:25:11.946018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.965 [2024-11-20 16:25:11.946025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.965 [2024-11-20 16:25:11.946031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:26.965 [2024-11-20 16:25:11.946601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 [2024-11-20 16:25:12.071040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 [2024-11-20 16:25:12.087283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 NULL1 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.965 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.965 [2024-11-20 16:25:12.145006] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:26.965 [2024-11-20 16:25:12.145047] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139072 ] 00:13:26.965 Attached to nqn.2016-06.io.spdk:cnode1 00:13:26.965 Namespace ID: 1 size: 1GB 00:13:26.965 fused_ordering(0) 00:13:26.965 fused_ordering(1) 00:13:26.965 fused_ordering(2) 00:13:26.965 fused_ordering(3) 00:13:26.965 fused_ordering(4) 00:13:26.965 fused_ordering(5) 00:13:26.965 fused_ordering(6) 00:13:26.965 fused_ordering(7) 00:13:26.965 fused_ordering(8) 00:13:26.965 fused_ordering(9) 00:13:26.965 fused_ordering(10) 00:13:26.965 fused_ordering(11) 00:13:26.965 fused_ordering(12) 00:13:26.965 fused_ordering(13) 00:13:26.965 fused_ordering(14) 00:13:26.965 fused_ordering(15) 00:13:26.965 fused_ordering(16) 00:13:26.965 fused_ordering(17) 00:13:26.965 fused_ordering(18) 00:13:26.965 fused_ordering(19) 00:13:26.965 fused_ordering(20) 00:13:26.965 fused_ordering(21) 00:13:26.965 fused_ordering(22) 00:13:26.965 fused_ordering(23) 00:13:26.965 fused_ordering(24) 00:13:26.965 fused_ordering(25) 00:13:26.965 fused_ordering(26) 00:13:26.965 fused_ordering(27) 00:13:26.965 
fused_ordering(28) 00:13:26.965 fused_ordering(29) 00:13:26.965 fused_ordering(30) 00:13:26.965 fused_ordering(31) 00:13:26.965 fused_ordering(32) 00:13:26.965 fused_ordering(33) 00:13:26.965 fused_ordering(34) 00:13:26.965 fused_ordering(35) 00:13:26.965 fused_ordering(36) 00:13:26.965 fused_ordering(37) 00:13:26.965 fused_ordering(38) 00:13:26.965 fused_ordering(39) 00:13:26.965 fused_ordering(40) 00:13:26.965 fused_ordering(41) 00:13:26.965 fused_ordering(42) 00:13:26.965 fused_ordering(43) 00:13:26.965 fused_ordering(44) 00:13:26.965 fused_ordering(45) 00:13:26.965 fused_ordering(46) 00:13:26.965 fused_ordering(47) 00:13:26.965 fused_ordering(48) 00:13:26.965 fused_ordering(49) 00:13:26.965 fused_ordering(50) 00:13:26.965 fused_ordering(51) 00:13:26.965 fused_ordering(52) 00:13:26.965 fused_ordering(53) 00:13:26.965 fused_ordering(54) 00:13:26.965 fused_ordering(55) 00:13:26.965 fused_ordering(56) 00:13:26.965 fused_ordering(57) 00:13:26.965 fused_ordering(58) 00:13:26.965 fused_ordering(59) 00:13:26.965 fused_ordering(60) 00:13:26.965 fused_ordering(61) 00:13:26.965 fused_ordering(62) 00:13:26.965 fused_ordering(63) 00:13:26.965 fused_ordering(64) 00:13:26.965 fused_ordering(65) 00:13:26.965 fused_ordering(66) 00:13:26.965 fused_ordering(67) 00:13:26.965 fused_ordering(68) 00:13:26.965 fused_ordering(69) 00:13:26.965 fused_ordering(70) 00:13:26.965 fused_ordering(71) 00:13:26.965 fused_ordering(72) 00:13:26.965 fused_ordering(73) 00:13:26.965 fused_ordering(74) 00:13:26.965 fused_ordering(75) 00:13:26.965 fused_ordering(76) 00:13:26.965 fused_ordering(77) 00:13:26.965 fused_ordering(78) 00:13:26.965 fused_ordering(79) 00:13:26.965 fused_ordering(80) 00:13:26.965 fused_ordering(81) 00:13:26.965 fused_ordering(82) 00:13:26.965 fused_ordering(83) 00:13:26.966 fused_ordering(84) 00:13:26.966 fused_ordering(85) 00:13:26.966 fused_ordering(86) 00:13:26.966 fused_ordering(87) 00:13:26.966 fused_ordering(88) 00:13:26.966 fused_ordering(89) 00:13:26.966 
fused_ordering(90) 00:13:26.966 fused_ordering(91) 00:13:26.966 fused_ordering(92) 00:13:26.966 fused_ordering(93) 00:13:26.966 fused_ordering(94) 00:13:26.966 fused_ordering(95) 00:13:26.966 fused_ordering(96) 00:13:26.966 fused_ordering(97) 00:13:26.966 fused_ordering(98) 00:13:26.966 fused_ordering(99) 00:13:26.966 fused_ordering(100) 00:13:26.966 fused_ordering(101) 00:13:26.966 fused_ordering(102) 00:13:26.966 fused_ordering(103) 00:13:26.966 fused_ordering(104) 00:13:26.966 fused_ordering(105) 00:13:26.966 fused_ordering(106) 00:13:26.966 fused_ordering(107) 00:13:26.966 fused_ordering(108) 00:13:26.966 fused_ordering(109) 00:13:26.966 fused_ordering(110) 00:13:26.966 fused_ordering(111) 00:13:26.966 fused_ordering(112) 00:13:26.966 fused_ordering(113) 00:13:26.966 fused_ordering(114) 00:13:26.966 fused_ordering(115) 00:13:26.966 fused_ordering(116) 00:13:26.966 fused_ordering(117) 00:13:26.966 fused_ordering(118) 00:13:26.966 fused_ordering(119) 00:13:26.966 fused_ordering(120) 00:13:26.966 fused_ordering(121) 00:13:26.966 fused_ordering(122) 00:13:26.966 fused_ordering(123) 00:13:26.966 fused_ordering(124) 00:13:26.966 fused_ordering(125) 00:13:26.966 fused_ordering(126) 00:13:26.966 fused_ordering(127) 00:13:26.966 fused_ordering(128) 00:13:26.966 fused_ordering(129) 00:13:26.966 fused_ordering(130) 00:13:26.966 fused_ordering(131) 00:13:26.966 fused_ordering(132) 00:13:26.966 fused_ordering(133) 00:13:26.966 fused_ordering(134) 00:13:26.966 fused_ordering(135) 00:13:26.966 fused_ordering(136) 00:13:26.966 fused_ordering(137) 00:13:26.966 fused_ordering(138) 00:13:26.966 fused_ordering(139) 00:13:26.966 fused_ordering(140) 00:13:26.966 fused_ordering(141) 00:13:26.966 fused_ordering(142) 00:13:26.966 fused_ordering(143) 00:13:26.966 fused_ordering(144) 00:13:26.966 fused_ordering(145) 00:13:26.966 fused_ordering(146) 00:13:26.966 fused_ordering(147) 00:13:26.966 fused_ordering(148) 00:13:26.966 fused_ordering(149) 00:13:26.966 fused_ordering(150) 
00:13:26.966 fused_ordering(151) 00:13:26.966 fused_ordering(152) 00:13:26.966 fused_ordering(153) 00:13:26.966 fused_ordering(154) 00:13:26.966 fused_ordering(155) 00:13:26.966 fused_ordering(156) 00:13:26.966 fused_ordering(157) 00:13:26.966 fused_ordering(158) 00:13:26.966 fused_ordering(159) 00:13:26.966 fused_ordering(160) 00:13:26.966 fused_ordering(161) 00:13:26.966 fused_ordering(162) 00:13:26.966 fused_ordering(163) 00:13:26.966 fused_ordering(164) 00:13:26.966 fused_ordering(165) 00:13:26.966 fused_ordering(166) 00:13:26.966 fused_ordering(167) 00:13:26.966 fused_ordering(168) 00:13:26.966 fused_ordering(169) 00:13:26.966 fused_ordering(170) 00:13:26.966 fused_ordering(171) 00:13:26.966 fused_ordering(172) 00:13:26.966 fused_ordering(173) 00:13:26.966 fused_ordering(174) 00:13:26.966 fused_ordering(175) 00:13:26.966 fused_ordering(176) 00:13:26.966 fused_ordering(177) 00:13:26.966 fused_ordering(178) 00:13:26.966 fused_ordering(179) 00:13:26.966 fused_ordering(180) 00:13:26.966 fused_ordering(181) 00:13:26.966 fused_ordering(182) 00:13:26.966 fused_ordering(183) 00:13:26.966 fused_ordering(184) 00:13:26.966 fused_ordering(185) 00:13:26.966 fused_ordering(186) 00:13:26.966 fused_ordering(187) 00:13:26.966 fused_ordering(188) 00:13:26.966 fused_ordering(189) 00:13:26.966 fused_ordering(190) 00:13:26.966 fused_ordering(191) 00:13:26.966 fused_ordering(192) 00:13:26.966 fused_ordering(193) 00:13:26.966 fused_ordering(194) 00:13:26.966 fused_ordering(195) 00:13:26.966 fused_ordering(196) 00:13:26.966 fused_ordering(197) 00:13:26.966 fused_ordering(198) 00:13:26.966 fused_ordering(199) 00:13:26.966 fused_ordering(200) 00:13:26.966 fused_ordering(201) 00:13:26.966 fused_ordering(202) 00:13:26.966 fused_ordering(203) 00:13:26.966 fused_ordering(204) 00:13:26.966 fused_ordering(205) 00:13:27.228 fused_ordering(206) 00:13:27.228 fused_ordering(207) 00:13:27.228 fused_ordering(208) 00:13:27.228 fused_ordering(209) 00:13:27.228 fused_ordering(210) 00:13:27.228 
fused_ordering(211) 00:13:27.228 fused_ordering(212) 00:13:27.228 fused_ordering(213) 00:13:27.228 fused_ordering(214) 00:13:27.228 fused_ordering(215) 00:13:27.228 fused_ordering(216) 00:13:27.228 fused_ordering(217) 00:13:27.228 fused_ordering(218) 00:13:27.228 fused_ordering(219) 00:13:27.228 fused_ordering(220) 00:13:27.228 fused_ordering(221) 00:13:27.228 fused_ordering(222) 00:13:27.228 fused_ordering(223) 00:13:27.228 fused_ordering(224) 00:13:27.228 fused_ordering(225) 00:13:27.228 fused_ordering(226) 00:13:27.228 fused_ordering(227) 00:13:27.228 fused_ordering(228) 00:13:27.228 fused_ordering(229) 00:13:27.229 fused_ordering(230) 00:13:27.229 fused_ordering(231) 00:13:27.229 fused_ordering(232) 00:13:27.229 fused_ordering(233) 00:13:27.229 fused_ordering(234) 00:13:27.229 fused_ordering(235) 00:13:27.229 fused_ordering(236) 00:13:27.229 fused_ordering(237) 00:13:27.229 fused_ordering(238) 00:13:27.229 fused_ordering(239) 00:13:27.229 fused_ordering(240) 00:13:27.229 fused_ordering(241) 00:13:27.229 fused_ordering(242) 00:13:27.229 fused_ordering(243) 00:13:27.229 fused_ordering(244) 00:13:27.229 fused_ordering(245) 00:13:27.229 fused_ordering(246) 00:13:27.229 fused_ordering(247) 00:13:27.229 fused_ordering(248) 00:13:27.229 fused_ordering(249) 00:13:27.229 fused_ordering(250) 00:13:27.229 fused_ordering(251) 00:13:27.229 fused_ordering(252) 00:13:27.229 fused_ordering(253) 00:13:27.229 fused_ordering(254) 00:13:27.229 fused_ordering(255) 00:13:27.229 fused_ordering(256) 00:13:27.229 fused_ordering(257) 00:13:27.229 fused_ordering(258) 00:13:27.229 fused_ordering(259) 00:13:27.229 fused_ordering(260) 00:13:27.229 fused_ordering(261) 00:13:27.229 fused_ordering(262) 00:13:27.229 fused_ordering(263) 00:13:27.229 fused_ordering(264) 00:13:27.229 fused_ordering(265) 00:13:27.229 fused_ordering(266) 00:13:27.229 fused_ordering(267) 00:13:27.229 fused_ordering(268) 00:13:27.229 fused_ordering(269) 00:13:27.229 fused_ordering(270) 00:13:27.229 fused_ordering(271) 
00:13:27.229 fused_ordering(272) 00:13:27.229 fused_ordering(273) 00:13:27.229 fused_ordering(274) 00:13:27.229 fused_ordering(275) 00:13:27.229 fused_ordering(276) 00:13:27.229 fused_ordering(277) 00:13:27.229 fused_ordering(278) 00:13:27.229 fused_ordering(279) 00:13:27.229 fused_ordering(280) 00:13:27.229 fused_ordering(281) 00:13:27.229 fused_ordering(282) 00:13:27.229 fused_ordering(283) 00:13:27.229 fused_ordering(284) 00:13:27.229 fused_ordering(285) 00:13:27.229 fused_ordering(286) 00:13:27.229 fused_ordering(287) 00:13:27.229 fused_ordering(288) 00:13:27.229 fused_ordering(289) 00:13:27.229 fused_ordering(290) 00:13:27.229 fused_ordering(291) 00:13:27.229 fused_ordering(292) 00:13:27.229 fused_ordering(293) 00:13:27.229 fused_ordering(294) 00:13:27.229 fused_ordering(295) 00:13:27.229 fused_ordering(296) 00:13:27.229 fused_ordering(297) 00:13:27.229 fused_ordering(298) 00:13:27.229 fused_ordering(299) 00:13:27.229 fused_ordering(300) 00:13:27.229 fused_ordering(301) 00:13:27.229 fused_ordering(302) 00:13:27.229 fused_ordering(303) 00:13:27.229 fused_ordering(304) 00:13:27.229 fused_ordering(305) 00:13:27.229 fused_ordering(306) 00:13:27.229 fused_ordering(307) 00:13:27.229 fused_ordering(308) 00:13:27.229 fused_ordering(309) 00:13:27.229 fused_ordering(310) 00:13:27.229 fused_ordering(311) 00:13:27.229 fused_ordering(312) 00:13:27.229 fused_ordering(313) 00:13:27.229 fused_ordering(314) 00:13:27.229 fused_ordering(315) 00:13:27.229 fused_ordering(316) 00:13:27.229 fused_ordering(317) 00:13:27.229 fused_ordering(318) 00:13:27.229 fused_ordering(319) 00:13:27.229 fused_ordering(320) 00:13:27.229 fused_ordering(321) 00:13:27.229 fused_ordering(322) 00:13:27.229 fused_ordering(323) 00:13:27.229 fused_ordering(324) 00:13:27.229 fused_ordering(325) 00:13:27.229 fused_ordering(326) 00:13:27.229 fused_ordering(327) 00:13:27.229 fused_ordering(328) 00:13:27.229 fused_ordering(329) 00:13:27.229 fused_ordering(330) 00:13:27.229 fused_ordering(331) 00:13:27.229 
fused_ordering(332) 00:13:27.229 [... repetitive fused_ordering counter output, entries 333-996 elided; timestamps advance 00:13:27.229 -> 00:13:28.668 ...] 00:13:28.668 fused_ordering(997)
00:13:28.668 fused_ordering(998) 00:13:28.668 fused_ordering(999) 00:13:28.668 fused_ordering(1000) 00:13:28.668 fused_ordering(1001) 00:13:28.668 fused_ordering(1002) 00:13:28.668 fused_ordering(1003) 00:13:28.668 fused_ordering(1004) 00:13:28.668 fused_ordering(1005) 00:13:28.668 fused_ordering(1006) 00:13:28.668 fused_ordering(1007) 00:13:28.668 fused_ordering(1008) 00:13:28.668 fused_ordering(1009) 00:13:28.668 fused_ordering(1010) 00:13:28.668 fused_ordering(1011) 00:13:28.668 fused_ordering(1012) 00:13:28.668 fused_ordering(1013) 00:13:28.668 fused_ordering(1014) 00:13:28.668 fused_ordering(1015) 00:13:28.668 fused_ordering(1016) 00:13:28.668 fused_ordering(1017) 00:13:28.668 fused_ordering(1018) 00:13:28.668 fused_ordering(1019) 00:13:28.668 fused_ordering(1020) 00:13:28.669 fused_ordering(1021) 00:13:28.669 fused_ordering(1022) 00:13:28.669 fused_ordering(1023) 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:28.669 rmmod nvme_tcp 00:13:28.669 rmmod nvme_fabrics 00:13:28.669 rmmod nvme_keyring 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2139035 ']' 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2139035 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2139035 ']' 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2139035 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2139035 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2139035' 00:13:28.669 killing process with pid 2139035 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2139035 00:13:28.669 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2139035 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.929 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.475 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:31.475 00:13:31.475 real 0m12.584s 00:13:31.475 user 0m6.297s 00:13:31.475 sys 0m6.889s 00:13:31.475 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.475 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:31.475 ************************************ 00:13:31.475 END TEST nvmf_fused_ordering 00:13:31.475 ************************************ 00:13:31.475 16:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:31.475 16:25:16 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:31.475 16:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.475 16:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.475 ************************************ 00:13:31.475 START TEST nvmf_ns_masking 00:13:31.475 ************************************ 00:13:31.475 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:31.475 * Looking for test storage... 00:13:31.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:31.476 16:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:31.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.476 --rc genhtml_branch_coverage=1 00:13:31.476 --rc genhtml_function_coverage=1 00:13:31.476 --rc genhtml_legend=1 00:13:31.476 --rc geninfo_all_blocks=1 00:13:31.476 --rc geninfo_unexecuted_blocks=1 00:13:31.476 00:13:31.476 ' 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:31.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.476 --rc genhtml_branch_coverage=1 00:13:31.476 --rc genhtml_function_coverage=1 00:13:31.476 --rc genhtml_legend=1 00:13:31.476 --rc geninfo_all_blocks=1 00:13:31.476 --rc geninfo_unexecuted_blocks=1 00:13:31.476 00:13:31.476 ' 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:31.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.476 --rc genhtml_branch_coverage=1 00:13:31.476 --rc genhtml_function_coverage=1 00:13:31.476 --rc genhtml_legend=1 00:13:31.476 --rc geninfo_all_blocks=1 00:13:31.476 --rc geninfo_unexecuted_blocks=1 00:13:31.476 00:13:31.476 ' 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:31.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.476 --rc genhtml_branch_coverage=1 00:13:31.476 --rc 
genhtml_function_coverage=1 00:13:31.476 --rc genhtml_legend=1 00:13:31.476 --rc geninfo_all_blocks=1 00:13:31.476 --rc geninfo_unexecuted_blocks=1 00:13:31.476 00:13:31.476 ' 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.476 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:31.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4ed8bf30-00e9-41c3-9b36-c57f5e575c8b 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6bd2ae94-de99-4b88-9146-bc28af5ded6b 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ebef59f3-5508-4abc-9335-451b1fb5f404 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:31.477 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.627 16:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.627 16:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:39.627 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:39.627 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:13:39.627 Found net devices under 0000:31:00.0: cvl_0_0 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.627 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:39.627 Found net devices under 0000:31:00.1: cvl_0_1 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:39.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:13:39.628 00:13:39.628 --- 10.0.0.2 ping statistics --- 00:13:39.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.628 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:39.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:13:39.628 00:13:39.628 --- 10.0.0.1 ping statistics --- 00:13:39.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.628 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2143777 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2143777 
00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2143777 ']' 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.628 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.628 [2024-11-20 16:25:24.541001] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:13:39.628 [2024-11-20 16:25:24.541073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.628 [2024-11-20 16:25:24.624724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.628 [2024-11-20 16:25:24.666556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.628 [2024-11-20 16:25:24.666592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:39.628 [2024-11-20 16:25:24.666600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.628 [2024-11-20 16:25:24.666607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.628 [2024-11-20 16:25:24.666613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.628 [2024-11-20 16:25:24.667204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.628 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.628 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:39.628 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.628 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:39.629 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:39.629 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.629 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:39.629 [2024-11-20 16:25:25.525978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.629 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:39.629 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:39.629 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:13:39.892 Malloc1 00:13:39.892 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:40.154 Malloc2 00:13:40.154 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:40.154 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:40.415 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.415 [2024-11-20 16:25:26.371491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.676 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:40.676 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ebef59f3-5508-4abc-9335-451b1fb5f404 -a 10.0.0.2 -s 4420 -i 4 00:13:40.676 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.676 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:40.676 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.676 16:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:40.676 16:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:42.592 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:42.592 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:42.592 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:42.852 [ 0]:0x1 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:42.852 
16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52ed1c99fd274cda89879da23738bf75 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52ed1c99fd274cda89879da23738bf75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:42.852 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.112 [ 0]:0x1 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52ed1c99fd274cda89879da23738bf75 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52ed1c99fd274cda89879da23738bf75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.112 [ 1]:0x2 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2429fdb8db0479d8eb56daae3c93df0 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2429fdb8db0479d8eb56daae3c93df0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:43.112 16:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.379 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.641 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:43.642 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:43.642 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ebef59f3-5508-4abc-9335-451b1fb5f404 -a 10.0.0.2 -s 4420 -i 4 00:13:43.902 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:43.903 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:43.903 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:43.903 16:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:43.903 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:43.903 16:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.448 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.449 16:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.449 [ 0]:0x2 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2429fdb8db0479d8eb56daae3c93df0 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2429fdb8db0479d8eb56daae3c93df0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.449 [ 0]:0x1 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52ed1c99fd274cda89879da23738bf75 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52ed1c99fd274cda89879da23738bf75 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.449 [ 1]:0x2 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2429fdb8db0479d8eb56daae3c93df0 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2429fdb8db0479d8eb56daae3c93df0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.449 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:46.710 [ 0]:0x2 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2429fdb8db0479d8eb56daae3c93df0 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2429fdb8db0479d8eb56daae3c93df0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:46.710 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.971 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.971 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:46.971 16:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ebef59f3-5508-4abc-9335-451b1fb5f404 -a 10.0.0.2 -s 4420 -i 4 00:13:47.233 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:47.233 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:47.233 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.233 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:47.233 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:47.233 16:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:49.148 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:49.148 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:49.148 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.148 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:49.148 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.148 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:49.148 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:49.148 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.409 [ 0]:0x1 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=52ed1c99fd274cda89879da23738bf75 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 52ed1c99fd274cda89879da23738bf75 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.409 [ 1]:0x2 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2429fdb8db0479d8eb56daae3c93df0 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2429fdb8db0479d8eb56daae3c93df0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.409 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:49.671 16:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 
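The `NOT ns_is_visible 0x1` invocations above come from autotest_common.sh's `NOT`/`valid_exec_arg` machinery, which runs a command expecting it to fail and records the exit status in `es` (the `es=1`, `(( es > 128 ))`, `(( !es == 0 ))` lines in the trace). A simplified sketch of that inversion; the real helper also special-cases signal exits (`es > 128`) and an expected-output pattern, both omitted here:

```shell
# Simplified NOT: succeed only when the wrapped command fails.
# The real autotest helper also records the exit status (es) and treats
# signal deaths (es > 128) and pattern matching specially; omitted here.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    else
        return 0    # command failed, as expected
    fi
}

NOT false && echo "expected failure observed"
```

In the trace, `NOT ns_is_visible 0x1` passes precisely because the masked namespace's NGUID compares equal to all zeros, making the inner `[[ ... != ... ]]` fail.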
00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.671 [ 0]:0x2 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2429fdb8db0479d8eb56daae3c93df0 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2429fdb8db0479d8eb56daae3c93df0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:49.671 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:49.932 [2024-11-20 16:25:35.698663] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:49.932 request: 00:13:49.932 { 00:13:49.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.932 "nsid": 2, 00:13:49.932 "host": "nqn.2016-06.io.spdk:host1", 00:13:49.932 "method": "nvmf_ns_remove_host", 00:13:49.932 "req_id": 1 00:13:49.932 } 00:13:49.932 Got JSON-RPC error response 00:13:49.932 response: 00:13:49.932 { 00:13:49.932 "code": -32602, 00:13:49.932 "message": "Invalid parameters" 00:13:49.932 } 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.932 [ 0]:0x2 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.932 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:50.193 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2429fdb8db0479d8eb56daae3c93df0 00:13:50.193 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2429fdb8db0479d8eb56daae3c93df0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:50.193 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:50.193 16:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2146251 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; 
nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2146251 /var/tmp/host.sock 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2146251 ']' 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:50.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.193 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:50.193 [2024-11-20 16:25:36.081653] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
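Later in this trace, namespaces are re-created with NGUIDs derived from UUIDs via a `uuid2nguid` helper (nvmf/common.sh@787), which strips the dashes with `tr -d -`. A minimal stand-in is below; note the uppercasing step is an assumption inferred from the uppercase `-g` arguments in the log, not something the visible `tr -d -` line confirms:

```shell
# Hypothetical stand-in for nvmf/common.sh's uuid2nguid: strip dashes
# (the `tr -d -` seen in the trace) and uppercase (assumed from the
# uppercase NGUIDs passed to nvmf_subsystem_add_ns -g).
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 4ed8bf30-00e9-41c3-9b36-c57f5e575c8b
```

For the UUID above this yields the same `4ED8BF3000E941C39B36C57F5E575C8B` value the trace passes as the `-g` NGUID.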
00:13:50.193 [2024-11-20 16:25:36.081705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146251 ] 00:13:50.454 [2024-11-20 16:25:36.171128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.454 [2024-11-20 16:25:36.207725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.025 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.025 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:51.025 16:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.286 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.286 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4ed8bf30-00e9-41c3-9b36-c57f5e575c8b 00:13:51.286 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:51.286 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4ED8BF3000E941C39B36C57F5E575C8B -i 00:13:51.546 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6bd2ae94-de99-4b88-9146-bc28af5ded6b 00:13:51.546 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:51.546 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6BD2AE94DE994B889146BC28AF5DED6B -i 00:13:51.807 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:51.807 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:52.068 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:52.068 16:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:52.329 nvme0n1 00:13:52.329 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:52.329 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:52.590 nvme1n2 00:13:52.590 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:52.590 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:52.590 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:52.590 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:52.590 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:52.850 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:52.850 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:52.850 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:52.850 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:53.110 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4ed8bf30-00e9-41c3-9b36-c57f5e575c8b == \4\e\d\8\b\f\3\0\-\0\0\e\9\-\4\1\c\3\-\9\b\3\6\-\c\5\7\f\5\e\5\7\5\c\8\b ]] 00:13:53.110 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:53.110 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:53.110 16:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:53.110 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6bd2ae94-de99-4b88-9146-bc28af5ded6b == \6\b\d\2\a\e\9\4\-\d\e\9\9\-\4\b\8\8\-\9\1\4\6\-\b\c\2\8\a\f\5\d\e\d\6\b ]] 00:13:53.110 16:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.375 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.634 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4ed8bf30-00e9-41c3-9b36-c57f5e575c8b 00:13:53.634 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:53.634 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4ED8BF3000E941C39B36C57F5E575C8B 00:13:53.634 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4ED8BF3000E941C39B36C57F5E575C8B 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:53.635 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4ED8BF3000E941C39B36C57F5E575C8B 00:13:53.635 [2024-11-20 16:25:39.585330] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:53.635 [2024-11-20 16:25:39.585361] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:53.635 [2024-11-20 16:25:39.585372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:53.635 request: 00:13:53.635 { 00:13:53.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.635 "namespace": { 00:13:53.635 "bdev_name": "invalid", 00:13:53.635 "nsid": 1, 00:13:53.635 "nguid": "4ED8BF3000E941C39B36C57F5E575C8B", 00:13:53.635 "no_auto_visible": false 00:13:53.635 }, 00:13:53.635 "method": "nvmf_subsystem_add_ns", 00:13:53.635 "req_id": 1 00:13:53.635 } 00:13:53.635 Got JSON-RPC error response 00:13:53.635 response: 00:13:53.635 { 00:13:53.635 "code": -32602, 00:13:53.635 "message": "Invalid parameters" 00:13:53.635 } 00:13:53.895 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:53.895 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.895 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.895 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:53.895 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4ed8bf30-00e9-41c3-9b36-c57f5e575c8b 00:13:53.895 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:53.895 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4ED8BF3000E941C39B36C57F5E575C8B -i 00:13:53.895 16:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2146251 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2146251 ']' 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2146251 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.440 16:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2146251 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2146251' 00:13:56.440 killing process with pid 2146251 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2146251 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2146251 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.440 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:56.701 rmmod nvme_tcp 00:13:56.701 rmmod 
nvme_fabrics 00:13:56.701 rmmod nvme_keyring 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2143777 ']' 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2143777 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2143777 ']' 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2143777 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143777 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2143777' 00:13:56.701 killing process with pid 2143777 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2143777 00:13:56.701 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2143777 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.962 
16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.962 16:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:58.878 00:13:58.878 real 0m27.850s 00:13:58.878 user 0m31.567s 00:13:58.878 sys 0m7.860s 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:58.878 ************************************ 00:13:58.878 END TEST nvmf_ns_masking 00:13:58.878 ************************************ 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.878 ************************************ 00:13:58.878 START TEST nvmf_nvme_cli 00:13:58.878 ************************************ 00:13:58.878 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:59.141 * Looking for test storage... 00:13:59.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.141 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:59.141 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:59.141 16:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.141 16:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:59.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.141 --rc genhtml_branch_coverage=1 00:13:59.141 --rc genhtml_function_coverage=1 00:13:59.141 --rc genhtml_legend=1 00:13:59.141 --rc geninfo_all_blocks=1 00:13:59.141 --rc geninfo_unexecuted_blocks=1 00:13:59.141 
00:13:59.141 ' 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:59.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.141 --rc genhtml_branch_coverage=1 00:13:59.141 --rc genhtml_function_coverage=1 00:13:59.141 --rc genhtml_legend=1 00:13:59.141 --rc geninfo_all_blocks=1 00:13:59.141 --rc geninfo_unexecuted_blocks=1 00:13:59.141 00:13:59.141 ' 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:59.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.141 --rc genhtml_branch_coverage=1 00:13:59.141 --rc genhtml_function_coverage=1 00:13:59.141 --rc genhtml_legend=1 00:13:59.141 --rc geninfo_all_blocks=1 00:13:59.141 --rc geninfo_unexecuted_blocks=1 00:13:59.141 00:13:59.141 ' 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:59.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.141 --rc genhtml_branch_coverage=1 00:13:59.141 --rc genhtml_function_coverage=1 00:13:59.141 --rc genhtml_legend=1 00:13:59.141 --rc geninfo_all_blocks=1 00:13:59.141 --rc geninfo_unexecuted_blocks=1 00:13:59.141 00:13:59.141 ' 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.141 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.142 16:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:59.142 16:25:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:07.411 16:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:07.411 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:07.411 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.411 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.412 16:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:07.412 Found net devices under 0000:31:00.0: cvl_0_0 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:07.412 Found net devices under 0000:31:00.1: cvl_0_1 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.412 16:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.412 16:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:07.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:14:07.412 00:14:07.412 --- 10.0.0.2 ping statistics --- 00:14:07.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.412 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:14:07.412 00:14:07.412 --- 10.0.0.1 ping statistics --- 00:14:07.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.412 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.412 16:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2151833 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2151833 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2151833 ']' 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.412 16:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.412 [2024-11-20 16:25:52.336923] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
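The `waitforlisten` call above blocks until the freshly launched `nvmf_tgt` (pid 2151833) is up and listening on `/var/tmp/spdk.sock`, retrying up to the `max_retries=100` budget the log shows. A minimal sketch of that polling pattern — the real helper lives in `autotest_common.sh` and also probes the RPC socket, so this is a simplified stand-in:

```shell
# Poll until a path appears, mirroring waitforlisten's retry loop.
# max_retries=100 matches the value logged above; the path would be
# /var/tmp/spdk.sock in the real run, parameterized here.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        if [ -e "$path" ]; then
            return 0            # target came up
        fi
        sleep 0.1               # brief back-off between probes
    done
    return 1                    # gave up after max_retries probes
}
```

The test traps SIGINT/SIGTERM/EXIT right after this wait succeeds, so a target that never comes up fails the test early instead of hanging it.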
00:14:07.412 [2024-11-20 16:25:52.336989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.412 [2024-11-20 16:25:52.418708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.412 [2024-11-20 16:25:52.455594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.412 [2024-11-20 16:25:52.455623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.412 [2024-11-20 16:25:52.455632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.412 [2024-11-20 16:25:52.455639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.412 [2024-11-20 16:25:52.455644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:07.412 [2024-11-20 16:25:52.457295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.412 [2024-11-20 16:25:52.457388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.412 [2024-11-20 16:25:52.457549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.412 [2024-11-20 16:25:52.457550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.412 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 [2024-11-20 16:25:53.184559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
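The target-side configuration that this test drives through `rpc_cmd` (SPDK's wrapper around `scripts/rpc.py`) can be summarized as the following sequence; every command and argument below is taken from the log, but `rpc` here is a dry-run stand-in that only prints, since a real invocation needs a running `nvmf_tgt`:

```shell
# Dry-run sketch of the JSON-RPC sequence visible in this log.
# rpc() just prints; a real run would invoke scripts/rpc.py against
# the target's /var/tmp/spdk.sock.
rpc() { printf '%s\n' "rpc.py $*"; }

configure_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB in-capsule data
    rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512 B blocks
    rpc bdev_malloc_create 64 512 -b Malloc1
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # namespaces become nvme0n1/n2 on the host
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
configure_target
```

The serial `SPDKISFASTANDAWESOME` set here is what the later `waitforserial` check greps for in `lsblk` output to confirm the connect succeeded.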
00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 Malloc0 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 Malloc1 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 [2024-11-20 16:25:53.278895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.413 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:14:07.673 00:14:07.673 Discovery Log Number of Records 2, Generation counter 2 00:14:07.673 =====Discovery Log Entry 0====== 00:14:07.673 trtype: tcp 00:14:07.673 adrfam: ipv4 00:14:07.674 subtype: current discovery subsystem 00:14:07.674 treq: not required 00:14:07.674 portid: 0 00:14:07.674 trsvcid: 4420 
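The `get_nvme_devs` helper that appears throughout this log counts block devices by reading `nvme list` output a line at a time and keeping only first fields that name `/dev/nvme*` nodes. A sketch of that parse over a canned listing (the listing itself is illustrative; serial and model are the ones this test configures):

```shell
# get_nvme_devs-style parse: header and separator lines fail the
# /dev/nvme* match and are skipped; device rows are echoed.
list_nvme_nodes() {
    local dev _
    while read -r dev _; do
        if [[ $dev == /dev/nvme* ]]; then
            echo "$dev"
        fi
    done
}

sample='Node                  SN                   Model
--------------------- -------------------- ----------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1'
printf '%s\n' "$sample" | list_nvme_nodes
```

Running it before `nvme connect` yields zero nodes (`nvme_num_before_connection=0` in the log) and two afterwards, which is the delta the test asserts on.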
00:14:07.674 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:07.674 traddr: 10.0.0.2 00:14:07.674 eflags: explicit discovery connections, duplicate discovery information 00:14:07.674 sectype: none 00:14:07.674 =====Discovery Log Entry 1====== 00:14:07.674 trtype: tcp 00:14:07.674 adrfam: ipv4 00:14:07.674 subtype: nvme subsystem 00:14:07.674 treq: not required 00:14:07.674 portid: 0 00:14:07.674 trsvcid: 4420 00:14:07.674 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:07.674 traddr: 10.0.0.2 00:14:07.674 eflags: none 00:14:07.674 sectype: none 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:07.674 16:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.060 16:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:09.060 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:09.060 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.060 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:09.060 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:09.060 16:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.614 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:11.614 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:11.615 
16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:11.615 /dev/nvme0n2 ]] 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:11.615 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:11.879 rmmod nvme_tcp 00:14:11.879 rmmod nvme_fabrics 00:14:11.879 rmmod nvme_keyring 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2151833 ']' 
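The `iptr` cleanup step that follows restores the firewall by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, so only the rules the test tagged with an `SPDK_NVMF` comment (via the earlier `ipts` helper) are dropped while pre-existing rules survive. A sketch of just the filtering stage, over illustrative saved-rules text (the tagged rule is the one from this log):

```shell
# Keep every saved rule except the ones the test tagged; grep exits 1
# when nothing matches, so guard it for use under `set -e`.
strip_spdk_rules() { grep -v SPDK_NVMF || true; }

sample='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
COMMIT'
printf '%s\n' "$sample" | strip_spdk_rules
```

Tag-and-filter keeps the cleanup idempotent: it can run after a partial test failure without needing to know which rules were actually added.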
00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2151833 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2151833 ']' 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2151833 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2151833 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2151833' 00:14:11.879 killing process with pid 2151833 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2151833 00:14:11.879 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2151833 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.140 16:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.050 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:14.050 00:14:14.050 real 0m15.153s 00:14:14.050 user 0m23.924s 00:14:14.050 sys 0m6.066s 00:14:14.050 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.050 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:14.050 ************************************ 00:14:14.050 END TEST nvmf_nvme_cli 00:14:14.050 ************************************ 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:14.310 ************************************ 00:14:14.310 
START TEST nvmf_vfio_user 00:14:14.310 ************************************ 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:14.310 * Looking for test storage... 00:14:14.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.310 16:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.310 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:14.571 16:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.571 --rc genhtml_branch_coverage=1 00:14:14.571 --rc genhtml_function_coverage=1 00:14:14.571 --rc genhtml_legend=1 00:14:14.571 --rc geninfo_all_blocks=1 00:14:14.571 --rc geninfo_unexecuted_blocks=1 00:14:14.571 00:14:14.571 ' 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.571 --rc genhtml_branch_coverage=1 00:14:14.571 --rc genhtml_function_coverage=1 00:14:14.571 --rc genhtml_legend=1 00:14:14.571 --rc geninfo_all_blocks=1 00:14:14.571 --rc geninfo_unexecuted_blocks=1 00:14:14.571 00:14:14.571 ' 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.571 --rc genhtml_branch_coverage=1 00:14:14.571 --rc genhtml_function_coverage=1 00:14:14.571 --rc genhtml_legend=1 00:14:14.571 --rc geninfo_all_blocks=1 00:14:14.571 --rc geninfo_unexecuted_blocks=1 00:14:14.571 00:14:14.571 ' 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:14.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.571 --rc genhtml_branch_coverage=1 00:14:14.571 --rc genhtml_function_coverage=1 00:14:14.571 --rc genhtml_legend=1 00:14:14.571 --rc geninfo_all_blocks=1 00:14:14.571 --rc geninfo_unexecuted_blocks=1 00:14:14.571 00:14:14.571 ' 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.571 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.572 
16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:14.572 16:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2153499 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2153499' 00:14:14.572 Process pid: 2153499 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2153499 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2153499 ']' 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m '[0,1,2,3]' 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.572 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:14.572 [2024-11-20 16:26:00.364434] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:14:14.572 [2024-11-20 16:26:00.364494] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.572 [2024-11-20 16:26:00.438279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.572 [2024-11-20 16:26:00.473789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.572 [2024-11-20 16:26:00.473822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.572 [2024-11-20 16:26:00.473830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.572 [2024-11-20 16:26:00.473837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.572 [2024-11-20 16:26:00.473843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:14.572 [2024-11-20 16:26:00.475404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.572 [2024-11-20 16:26:00.475518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.572 [2024-11-20 16:26:00.475672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.572 [2024-11-20 16:26:00.475672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.516 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.516 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:15.516 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:16.457 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:16.457 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:16.457 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:16.457 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:16.457 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:16.457 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:16.718 Malloc1 00:14:16.718 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:16.978 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:16.978 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:17.249 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:17.249 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:17.249 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:17.509 Malloc2 00:14:17.509 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:17.770 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:17.770 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:18.033 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:18.033 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:18.033 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:18.033 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:18.033 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:18.033 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:18.033 [2024-11-20 16:26:03.869704] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:14:18.033 [2024-11-20 16:26:03.869774] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154195 ] 00:14:18.033 [2024-11-20 16:26:03.925109] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:18.033 [2024-11-20 16:26:03.933255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:18.033 [2024-11-20 16:26:03.933277] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f263f691000 00:14:18.033 [2024-11-20 16:26:03.934245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.033 [2024-11-20 16:26:03.935251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.033 [2024-11-20 16:26:03.936249] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.033 [2024-11-20 16:26:03.937256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.033 [2024-11-20 16:26:03.938263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.033 [2024-11-20 16:26:03.939267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.033 [2024-11-20 16:26:03.940276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:18.033 [2024-11-20 16:26:03.941275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:18.033 [2024-11-20 16:26:03.942291] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:18.033 [2024-11-20 16:26:03.942300] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f263f686000 00:14:18.033 [2024-11-20 16:26:03.943629] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:18.033 [2024-11-20 16:26:03.965142] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:18.033 [2024-11-20 16:26:03.965173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:18.033 [2024-11-20 16:26:03.967419] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:18.033 [2024-11-20 16:26:03.967467] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:18.033 [2024-11-20 16:26:03.967551] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:18.033 [2024-11-20 16:26:03.967567] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:18.033 [2024-11-20 16:26:03.967573] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:18.033 [2024-11-20 16:26:03.968417] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:18.033 [2024-11-20 16:26:03.968427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:18.033 [2024-11-20 16:26:03.968434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:18.033 [2024-11-20 16:26:03.969423] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:18.033 [2024-11-20 16:26:03.969432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:18.033 [2024-11-20 16:26:03.969439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:18.033 [2024-11-20 16:26:03.970431] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:18.033 [2024-11-20 16:26:03.970439] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:18.033 [2024-11-20 16:26:03.971434] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:18.033 [2024-11-20 16:26:03.971442] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:18.033 [2024-11-20 16:26:03.971447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:18.033 [2024-11-20 16:26:03.971454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:18.033 [2024-11-20 16:26:03.971563] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:18.034 [2024-11-20 16:26:03.971568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:18.034 [2024-11-20 16:26:03.971573] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:18.034 [2024-11-20 16:26:03.972440] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:18.034 [2024-11-20 16:26:03.973438] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:18.034 [2024-11-20 16:26:03.974445] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:18.034 [2024-11-20 16:26:03.975441] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:18.034 [2024-11-20 16:26:03.975511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:18.034 [2024-11-20 16:26:03.976461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:18.034 [2024-11-20 16:26:03.976469] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:18.034 [2024-11-20 16:26:03.976474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976496] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:18.034 [2024-11-20 16:26:03.976504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976520] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.034 [2024-11-20 16:26:03.976526] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.034 [2024-11-20 16:26:03.976530] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.034 [2024-11-20 16:26:03.976542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.976582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.976592] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:18.034 [2024-11-20 16:26:03.976597] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:18.034 [2024-11-20 16:26:03.976602] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:18.034 [2024-11-20 16:26:03.976606] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:18.034 [2024-11-20 16:26:03.976613] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:18.034 [2024-11-20 16:26:03.976618] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:18.034 [2024-11-20 16:26:03.976623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.976652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.976663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.034 [2024-11-20 
16:26:03.976672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.034 [2024-11-20 16:26:03.976680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.034 [2024-11-20 16:26:03.976689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:18.034 [2024-11-20 16:26:03.976694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.976720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.976727] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:18.034 [2024-11-20 16:26:03.976732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.976761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.976823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976839] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:18.034 [2024-11-20 16:26:03.976844] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:18.034 [2024-11-20 16:26:03.976847] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.034 [2024-11-20 16:26:03.976853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.976865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.976874] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:18.034 [2024-11-20 16:26:03.976887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976902] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.034 [2024-11-20 16:26:03.976906] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.034 [2024-11-20 16:26:03.976910] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.034 [2024-11-20 16:26:03.976916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.976934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.976946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.976963] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:18.034 [2024-11-20 16:26:03.976968] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.034 [2024-11-20 16:26:03.976972] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.034 [2024-11-20 16:26:03.976978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.976997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.977005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.977012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.977020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.977026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.977032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.977037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.977042] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:18.034 [2024-11-20 16:26:03.977046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:18.034 [2024-11-20 16:26:03.977051] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:18.034 [2024-11-20 16:26:03.977070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.977080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.977091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.977101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.977113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.977120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:18.034 [2024-11-20 16:26:03.977131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:18.034 [2024-11-20 16:26:03.977141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:18.035 [2024-11-20 16:26:03.977154] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:18.035 [2024-11-20 16:26:03.977160] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:18.035 [2024-11-20 16:26:03.977163] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:18.035 [2024-11-20 16:26:03.977167] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:18.035 [2024-11-20 16:26:03.977170] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:18.035 [2024-11-20 16:26:03.977178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:18.035 [2024-11-20 16:26:03.977186] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:18.035 [2024-11-20 16:26:03.977191] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:18.035 [2024-11-20 16:26:03.977194] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.035 [2024-11-20 16:26:03.977200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:18.035 [2024-11-20 16:26:03.977208] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:18.035 [2024-11-20 16:26:03.977212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:18.035 [2024-11-20 16:26:03.977216] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.035 [2024-11-20 16:26:03.977222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:18.035 [2024-11-20 16:26:03.977229] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:18.035 [2024-11-20 16:26:03.977234] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:18.035 [2024-11-20 16:26:03.977237] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:18.035 [2024-11-20 16:26:03.977243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:18.035 [2024-11-20 16:26:03.977250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:18.035 [2024-11-20 16:26:03.977262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:18.035 [2024-11-20 16:26:03.977274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:18.035 [2024-11-20 16:26:03.977282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:18.035 ===================================================== 00:14:18.035 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:18.035 ===================================================== 00:14:18.035 Controller Capabilities/Features 00:14:18.035 ================================ 00:14:18.035 Vendor ID: 4e58 00:14:18.035 Subsystem Vendor ID: 4e58 00:14:18.035 Serial Number: SPDK1 00:14:18.035 Model Number: SPDK bdev Controller 00:14:18.035 Firmware Version: 25.01 00:14:18.035 Recommended Arb Burst: 6 00:14:18.035 IEEE OUI Identifier: 8d 6b 50 00:14:18.035 Multi-path I/O 00:14:18.035 May have multiple subsystem ports: Yes 00:14:18.035 May have multiple controllers: Yes 00:14:18.035 Associated with SR-IOV VF: No 00:14:18.035 Max Data Transfer Size: 131072 00:14:18.035 Max Number of Namespaces: 32 00:14:18.035 Max Number of I/O Queues: 127 00:14:18.035 NVMe Specification Version (VS): 1.3 00:14:18.035 NVMe Specification Version (Identify): 1.3 00:14:18.035 Maximum Queue Entries: 256 00:14:18.035 Contiguous Queues Required: Yes 00:14:18.035 Arbitration Mechanisms Supported 00:14:18.035 Weighted Round Robin: Not Supported 00:14:18.035 Vendor Specific: Not Supported 00:14:18.035 Reset Timeout: 15000 ms 00:14:18.035 Doorbell Stride: 4 bytes 00:14:18.035 NVM Subsystem Reset: Not Supported 00:14:18.035 Command Sets Supported 00:14:18.035 NVM Command Set: Supported 00:14:18.035 Boot Partition: Not Supported 00:14:18.035 Memory 
Page Size Minimum: 4096 bytes 00:14:18.035 Memory Page Size Maximum: 4096 bytes 00:14:18.035 Persistent Memory Region: Not Supported 00:14:18.035 Optional Asynchronous Events Supported 00:14:18.035 Namespace Attribute Notices: Supported 00:14:18.035 Firmware Activation Notices: Not Supported 00:14:18.035 ANA Change Notices: Not Supported 00:14:18.035 PLE Aggregate Log Change Notices: Not Supported 00:14:18.035 LBA Status Info Alert Notices: Not Supported 00:14:18.035 EGE Aggregate Log Change Notices: Not Supported 00:14:18.035 Normal NVM Subsystem Shutdown event: Not Supported 00:14:18.035 Zone Descriptor Change Notices: Not Supported 00:14:18.035 Discovery Log Change Notices: Not Supported 00:14:18.035 Controller Attributes 00:14:18.035 128-bit Host Identifier: Supported 00:14:18.035 Non-Operational Permissive Mode: Not Supported 00:14:18.035 NVM Sets: Not Supported 00:14:18.035 Read Recovery Levels: Not Supported 00:14:18.035 Endurance Groups: Not Supported 00:14:18.035 Predictable Latency Mode: Not Supported 00:14:18.035 Traffic Based Keep ALive: Not Supported 00:14:18.035 Namespace Granularity: Not Supported 00:14:18.035 SQ Associations: Not Supported 00:14:18.035 UUID List: Not Supported 00:14:18.035 Multi-Domain Subsystem: Not Supported 00:14:18.035 Fixed Capacity Management: Not Supported 00:14:18.035 Variable Capacity Management: Not Supported 00:14:18.035 Delete Endurance Group: Not Supported 00:14:18.035 Delete NVM Set: Not Supported 00:14:18.035 Extended LBA Formats Supported: Not Supported 00:14:18.035 Flexible Data Placement Supported: Not Supported 00:14:18.035 00:14:18.035 Controller Memory Buffer Support 00:14:18.035 ================================ 00:14:18.035 Supported: No 00:14:18.035 00:14:18.035 Persistent Memory Region Support 00:14:18.035 ================================ 00:14:18.035 Supported: No 00:14:18.035 00:14:18.035 Admin Command Set Attributes 00:14:18.035 ============================ 00:14:18.035 Security Send/Receive: Not Supported 
00:14:18.035 Format NVM: Not Supported 00:14:18.035 Firmware Activate/Download: Not Supported 00:14:18.035 Namespace Management: Not Supported 00:14:18.035 Device Self-Test: Not Supported 00:14:18.035 Directives: Not Supported 00:14:18.035 NVMe-MI: Not Supported 00:14:18.035 Virtualization Management: Not Supported 00:14:18.035 Doorbell Buffer Config: Not Supported 00:14:18.035 Get LBA Status Capability: Not Supported 00:14:18.035 Command & Feature Lockdown Capability: Not Supported 00:14:18.035 Abort Command Limit: 4 00:14:18.035 Async Event Request Limit: 4 00:14:18.035 Number of Firmware Slots: N/A 00:14:18.035 Firmware Slot 1 Read-Only: N/A 00:14:18.035 Firmware Activation Without Reset: N/A 00:14:18.035 Multiple Update Detection Support: N/A 00:14:18.035 Firmware Update Granularity: No Information Provided 00:14:18.035 Per-Namespace SMART Log: No 00:14:18.035 Asymmetric Namespace Access Log Page: Not Supported 00:14:18.035 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:18.035 Command Effects Log Page: Supported 00:14:18.035 Get Log Page Extended Data: Supported 00:14:18.035 Telemetry Log Pages: Not Supported 00:14:18.035 Persistent Event Log Pages: Not Supported 00:14:18.035 Supported Log Pages Log Page: May Support 00:14:18.035 Commands Supported & Effects Log Page: Not Supported 00:14:18.035 Feature Identifiers & Effects Log Page:May Support 00:14:18.035 NVMe-MI Commands & Effects Log Page: May Support 00:14:18.035 Data Area 4 for Telemetry Log: Not Supported 00:14:18.035 Error Log Page Entries Supported: 128 00:14:18.035 Keep Alive: Supported 00:14:18.035 Keep Alive Granularity: 10000 ms 00:14:18.035 00:14:18.035 NVM Command Set Attributes 00:14:18.035 ========================== 00:14:18.035 Submission Queue Entry Size 00:14:18.035 Max: 64 00:14:18.035 Min: 64 00:14:18.035 Completion Queue Entry Size 00:14:18.035 Max: 16 00:14:18.035 Min: 16 00:14:18.035 Number of Namespaces: 32 00:14:18.035 Compare Command: Supported 00:14:18.035 Write Uncorrectable 
Command: Not Supported 00:14:18.035 Dataset Management Command: Supported 00:14:18.035 Write Zeroes Command: Supported 00:14:18.035 Set Features Save Field: Not Supported 00:14:18.035 Reservations: Not Supported 00:14:18.035 Timestamp: Not Supported 00:14:18.035 Copy: Supported 00:14:18.035 Volatile Write Cache: Present 00:14:18.035 Atomic Write Unit (Normal): 1 00:14:18.035 Atomic Write Unit (PFail): 1 00:14:18.035 Atomic Compare & Write Unit: 1 00:14:18.035 Fused Compare & Write: Supported 00:14:18.035 Scatter-Gather List 00:14:18.035 SGL Command Set: Supported (Dword aligned) 00:14:18.035 SGL Keyed: Not Supported 00:14:18.035 SGL Bit Bucket Descriptor: Not Supported 00:14:18.035 SGL Metadata Pointer: Not Supported 00:14:18.035 Oversized SGL: Not Supported 00:14:18.035 SGL Metadata Address: Not Supported 00:14:18.035 SGL Offset: Not Supported 00:14:18.035 Transport SGL Data Block: Not Supported 00:14:18.035 Replay Protected Memory Block: Not Supported 00:14:18.035 00:14:18.035 Firmware Slot Information 00:14:18.035 ========================= 00:14:18.035 Active slot: 1 00:14:18.035 Slot 1 Firmware Revision: 25.01 00:14:18.035 00:14:18.035 00:14:18.035 Commands Supported and Effects 00:14:18.035 ============================== 00:14:18.035 Admin Commands 00:14:18.036 -------------- 00:14:18.036 Get Log Page (02h): Supported 00:14:18.036 Identify (06h): Supported 00:14:18.036 Abort (08h): Supported 00:14:18.036 Set Features (09h): Supported 00:14:18.036 Get Features (0Ah): Supported 00:14:18.036 Asynchronous Event Request (0Ch): Supported 00:14:18.036 Keep Alive (18h): Supported 00:14:18.036 I/O Commands 00:14:18.036 ------------ 00:14:18.036 Flush (00h): Supported LBA-Change 00:14:18.036 Write (01h): Supported LBA-Change 00:14:18.036 Read (02h): Supported 00:14:18.036 Compare (05h): Supported 00:14:18.036 Write Zeroes (08h): Supported LBA-Change 00:14:18.036 Dataset Management (09h): Supported LBA-Change 00:14:18.036 Copy (19h): Supported LBA-Change 00:14:18.036 
00:14:18.036 Error Log 00:14:18.036 ========= 00:14:18.036 00:14:18.036 Arbitration 00:14:18.036 =========== 00:14:18.036 Arbitration Burst: 1 00:14:18.036 00:14:18.036 Power Management 00:14:18.036 ================ 00:14:18.036 Number of Power States: 1 00:14:18.036 Current Power State: Power State #0 00:14:18.036 Power State #0: 00:14:18.036 Max Power: 0.00 W 00:14:18.036 Non-Operational State: Operational 00:14:18.036 Entry Latency: Not Reported 00:14:18.036 Exit Latency: Not Reported 00:14:18.036 Relative Read Throughput: 0 00:14:18.036 Relative Read Latency: 0 00:14:18.036 Relative Write Throughput: 0 00:14:18.036 Relative Write Latency: 0 00:14:18.036 Idle Power: Not Reported 00:14:18.036 Active Power: Not Reported 00:14:18.036 Non-Operational Permissive Mode: Not Supported 00:14:18.036 00:14:18.036 Health Information 00:14:18.036 ================== 00:14:18.036 Critical Warnings: 00:14:18.036 Available Spare Space: OK 00:14:18.036 Temperature: OK 00:14:18.036 Device Reliability: OK 00:14:18.036 Read Only: No 00:14:18.036 Volatile Memory Backup: OK 00:14:18.036 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:18.036 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:18.036 Available Spare: 0% 00:14:18.036 Available Sp[2024-11-20 16:26:03.977384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:18.036 [2024-11-20 16:26:03.977393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:18.036 [2024-11-20 16:26:03.977420] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:18.036 [2024-11-20 16:26:03.977430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.036 [2024-11-20 16:26:03.977437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.036 [2024-11-20 16:26:03.977444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.036 [2024-11-20 16:26:03.977450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:18.036 [2024-11-20 16:26:03.978477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:18.036 [2024-11-20 16:26:03.978488] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:18.036 [2024-11-20 16:26:03.979481] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:18.036 [2024-11-20 16:26:03.982994] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:18.036 [2024-11-20 16:26:03.983001] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:18.036 [2024-11-20 16:26:03.983503] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:18.036 [2024-11-20 16:26:03.983513] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:18.036 [2024-11-20 16:26:03.983574] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:18.036 [2024-11-20 16:26:03.985540] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:18.298 are Threshold: 0% 00:14:18.298 Life Percentage Used: 0% 
00:14:18.298 Data Units Read: 0 00:14:18.298 Data Units Written: 0 00:14:18.298 Host Read Commands: 0 00:14:18.298 Host Write Commands: 0 00:14:18.298 Controller Busy Time: 0 minutes 00:14:18.298 Power Cycles: 0 00:14:18.298 Power On Hours: 0 hours 00:14:18.298 Unsafe Shutdowns: 0 00:14:18.298 Unrecoverable Media Errors: 0 00:14:18.298 Lifetime Error Log Entries: 0 00:14:18.298 Warning Temperature Time: 0 minutes 00:14:18.298 Critical Temperature Time: 0 minutes 00:14:18.298 00:14:18.298 Number of Queues 00:14:18.298 ================ 00:14:18.298 Number of I/O Submission Queues: 127 00:14:18.298 Number of I/O Completion Queues: 127 00:14:18.298 00:14:18.298 Active Namespaces 00:14:18.298 ================= 00:14:18.298 Namespace ID:1 00:14:18.298 Error Recovery Timeout: Unlimited 00:14:18.298 Command Set Identifier: NVM (00h) 00:14:18.298 Deallocate: Supported 00:14:18.298 Deallocated/Unwritten Error: Not Supported 00:14:18.298 Deallocated Read Value: Unknown 00:14:18.298 Deallocate in Write Zeroes: Not Supported 00:14:18.298 Deallocated Guard Field: 0xFFFF 00:14:18.298 Flush: Supported 00:14:18.298 Reservation: Supported 00:14:18.298 Namespace Sharing Capabilities: Multiple Controllers 00:14:18.298 Size (in LBAs): 131072 (0GiB) 00:14:18.298 Capacity (in LBAs): 131072 (0GiB) 00:14:18.298 Utilization (in LBAs): 131072 (0GiB) 00:14:18.298 NGUID: 1F839BC5C2514D84A6B440437DB2AB6B 00:14:18.298 UUID: 1f839bc5-c251-4d84-a6b4-40437db2ab6b 00:14:18.298 Thin Provisioning: Not Supported 00:14:18.298 Per-NS Atomic Units: Yes 00:14:18.298 Atomic Boundary Size (Normal): 0 00:14:18.298 Atomic Boundary Size (PFail): 0 00:14:18.298 Atomic Boundary Offset: 0 00:14:18.298 Maximum Single Source Range Length: 65535 00:14:18.298 Maximum Copy Length: 65535 00:14:18.298 Maximum Source Range Count: 1 00:14:18.298 NGUID/EUI64 Never Reused: No 00:14:18.298 Namespace Write Protected: No 00:14:18.298 Number of LBA Formats: 1 00:14:18.298 Current LBA Format: LBA Format #00 00:14:18.298 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:18.298 00:14:18.298 16:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:18.298 [2024-11-20 16:26:04.187704] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:23.581 Initializing NVMe Controllers 00:14:23.581 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:23.581 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:23.581 Initialization complete. Launching workers. 00:14:23.581 ======================================================== 00:14:23.581 Latency(us) 00:14:23.581 Device Information : IOPS MiB/s Average min max 00:14:23.581 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40106.20 156.66 3192.00 845.94 10782.43 00:14:23.581 ======================================================== 00:14:23.581 Total : 40106.20 156.66 3192.00 845.94 10782.43 00:14:23.581 00:14:23.581 [2024-11-20 16:26:09.208407] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:23.581 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:23.581 [2024-11-20 16:26:09.402301] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:28.865 Initializing NVMe Controllers 00:14:28.865 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:28.865 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:28.865 Initialization complete. Launching workers. 00:14:28.865 ======================================================== 00:14:28.865 Latency(us) 00:14:28.865 Device Information : IOPS MiB/s Average min max 00:14:28.865 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16001.52 62.51 8004.81 5994.02 15933.41 00:14:28.865 ======================================================== 00:14:28.865 Total : 16001.52 62.51 8004.81 5994.02 15933.41 00:14:28.865 00:14:28.865 [2024-11-20 16:26:14.445936] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:28.865 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:28.865 [2024-11-20 16:26:14.643807] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.150 [2024-11-20 16:26:19.726238] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.150 Initializing NVMe Controllers 00:14:34.150 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:34.150 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:34.150 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:34.150 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:34.150 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:34.150 Initialization complete. 
Launching workers. 00:14:34.150 Starting thread on core 2 00:14:34.150 Starting thread on core 3 00:14:34.150 Starting thread on core 1 00:14:34.150 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:34.150 [2024-11-20 16:26:20.013077] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.352 [2024-11-20 16:26:23.618141] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:38.352 Initializing NVMe Controllers 00:14:38.352 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.352 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.352 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:38.352 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:38.352 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:38.352 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:38.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:38.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:38.352 Initialization complete. Launching workers. 
00:14:38.352 Starting thread on core 1 with urgent priority queue 00:14:38.352 Starting thread on core 2 with urgent priority queue 00:14:38.352 Starting thread on core 3 with urgent priority queue 00:14:38.352 Starting thread on core 0 with urgent priority queue 00:14:38.352 SPDK bdev Controller (SPDK1 ) core 0: 11800.00 IO/s 8.47 secs/100000 ios 00:14:38.352 SPDK bdev Controller (SPDK1 ) core 1: 9337.67 IO/s 10.71 secs/100000 ios 00:14:38.352 SPDK bdev Controller (SPDK1 ) core 2: 9870.33 IO/s 10.13 secs/100000 ios 00:14:38.352 SPDK bdev Controller (SPDK1 ) core 3: 8561.67 IO/s 11.68 secs/100000 ios 00:14:38.352 ======================================================== 00:14:38.352 00:14:38.352 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:38.352 [2024-11-20 16:26:23.910417] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.352 Initializing NVMe Controllers 00:14:38.352 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.352 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:38.352 Namespace ID: 1 size: 0GB 00:14:38.352 Initialization complete. 00:14:38.352 INFO: using host memory buffer for IO 00:14:38.352 Hello world! 
00:14:38.352 [2024-11-20 16:26:23.946634] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:38.352 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:14:38.352 [2024-11-20 16:26:24.233378] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:39.736 Initializing NVMe Controllers
00:14:39.736 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:39.736 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:39.736 Initialization complete. Launching workers.
00:14:39.736 submit (in ns) avg, min, max = 7892.2, 3893.3, 4997391.7
00:14:39.736 complete (in ns) avg, min, max = 18254.7, 2408.3, 4994172.5
00:14:39.736
00:14:39.736 Submit histogram
00:14:39.736 ================
00:14:39.736 Range in us Cumulative Count
00:14:39.736 3.893 - 3.920: 1.9555% ( 371)
00:14:39.736 3.920 - 3.947: 8.9289% ( 1323)
00:14:39.736 3.947 - 3.973: 19.6816% ( 2040)
00:14:39.736 3.973 - 4.000: 30.9351% ( 2135)
00:14:39.736 4.000 - 4.027: 41.8986% ( 2080)
00:14:39.736 4.027 - 4.053: 54.7596% ( 2440)
00:14:39.736 4.053 - 4.080: 69.9715% ( 2886)
00:14:39.736 4.080 - 4.107: 84.6721% ( 2789)
00:14:39.736 4.107 - 4.133: 93.7856% ( 1729)
00:14:39.736 4.133 - 4.160: 97.6492% ( 733)
00:14:39.736 4.160 - 4.187: 98.9827% ( 253)
00:14:39.736 4.187 - 4.213: 99.3464% ( 69)
00:14:39.736 4.213 - 4.240: 99.4571% ( 21)
00:14:39.736 4.240 - 4.267: 99.4782% ( 4)
00:14:39.736 4.267 - 4.293: 99.4834% ( 1)
00:14:39.736 4.293 - 4.320: 99.4940% ( 2)
00:14:39.736 4.587 - 4.613: 99.4993% ( 1)
00:14:39.736 4.667 - 4.693: 99.5045% ( 1)
00:14:39.736 4.747 - 4.773: 99.5151% ( 2)
00:14:39.736 4.853 - 4.880: 99.5203% ( 1)
00:14:39.736 5.200 - 5.227: 99.5256% ( 1)
00:14:39.736 5.387 - 5.413: 99.5309% ( 1)
00:14:39.736 5.440 - 5.467: 99.5362% ( 1)
00:14:39.736 5.627 - 5.653: 99.5414% ( 1)
00:14:39.736 5.707 - 5.733: 99.5467% ( 1)
00:14:39.736 5.733 - 5.760: 99.5520% ( 1)
00:14:39.736 5.787 - 5.813: 99.5572% ( 1)
00:14:39.736 5.813 - 5.840: 99.5625% ( 1)
00:14:39.736 5.840 - 5.867: 99.5678% ( 1)
00:14:39.736 5.867 - 5.893: 99.5783% ( 2)
00:14:39.736 6.000 - 6.027: 99.5836% ( 1)
00:14:39.736 6.027 - 6.053: 99.5889% ( 1)
00:14:39.736 6.053 - 6.080: 99.5941% ( 1)
00:14:39.736 6.080 - 6.107: 99.5994% ( 1)
00:14:39.736 6.107 - 6.133: 99.6152% ( 3)
00:14:39.736 6.187 - 6.213: 99.6310% ( 3)
00:14:39.736 6.213 - 6.240: 99.6416% ( 2)
00:14:39.736 6.267 - 6.293: 99.6521% ( 2)
00:14:39.736 6.293 - 6.320: 99.6574% ( 1)
00:14:39.736 6.320 - 6.347: 99.6627% ( 1)
00:14:39.736 6.347 - 6.373: 99.6679% ( 1)
00:14:39.736 6.373 - 6.400: 99.6732% ( 1)
00:14:39.736 6.400 - 6.427: 99.6785% ( 1)
00:14:39.736 6.427 - 6.453: 99.6890% ( 2)
00:14:39.736 6.507 - 6.533: 99.6996% ( 2)
00:14:39.736 6.533 - 6.560: 99.7048% ( 1)
00:14:39.736 6.560 - 6.587: 99.7154% ( 2)
00:14:39.736 6.587 - 6.613: 99.7206% ( 1)
00:14:39.736 6.667 - 6.693: 99.7259% ( 1)
00:14:39.736 6.773 - 6.800: 99.7312% ( 1)
00:14:39.736 6.800 - 6.827: 99.7470% ( 3)
00:14:39.736 6.827 - 6.880: 99.7575% ( 2)
00:14:39.736 6.880 - 6.933: 99.7734% ( 3)
00:14:39.736 6.933 - 6.987: 99.7786% ( 1)
00:14:39.736 7.040 - 7.093: 99.7892% ( 2)
00:14:39.736 7.093 - 7.147: 99.7944% ( 1)
00:14:39.736 7.147 - 7.200: 99.8208% ( 5)
00:14:39.736 7.200 - 7.253: 99.8261% ( 1)
00:14:39.736 7.307 - 7.360: 99.8313% ( 1)
00:14:39.736 7.413 - 7.467: 99.8419% ( 2)
00:14:39.736 7.467 - 7.520: 99.8471% ( 1)
00:14:39.736 7.680 - 7.733: 99.8577% ( 2)
00:14:39.736 7.840 - 7.893: 99.8630% ( 1)
00:14:39.736 7.947 - 8.000: 99.8735% ( 2)
00:14:39.736 8.000 - 8.053: 99.8788% ( 1)
00:14:39.736 8.213 - 8.267: 99.8840% ( 1)
00:14:39.736 8.267 - 8.320: 99.8893% ( 1)
00:14:39.736 11.893 - 11.947: 99.8946% ( 1)
00:14:39.736 13.120 -
13.173: 99.8999% ( 1)
00:14:39.736 13.653 - 13.760: 99.9051% ( 1)
00:14:39.736 3986.773 - 4014.080: 99.9947% ( 17)
00:14:39.736 4997.120 - 5024.427: 100.0000% ( 1)
00:14:39.736
00:14:39.736 Complete histogram
00:14:39.736 ==================
00:14:39.736 Range in us Cumulative Count
00:14:39.736 2.400 - 2.413: 0.1107% ( 21)
00:14:39.736 2.413 - 2.427: 0.8065% ( 132)
00:14:39.736 2.427 - 2.440: 0.8644% ( 11)
00:14:39.736 2.440 - 2.453: 1.0911% ( 43)
00:14:39.736 2.453 - 2.467: 14.7322% ( 2588)
00:14:39.736 2.467 - [2024-11-20 16:26:25.255866] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:39.736 2.480: 54.6542% ( 7574)
00:14:39.736 2.480 - 2.493: 63.4198% ( 1663)
00:14:39.736 2.493 - 2.507: 74.8893% ( 2176)
00:14:39.736 2.507 - 2.520: 79.3116% ( 839)
00:14:39.736 2.520 - 2.533: 81.8100% ( 474)
00:14:39.736 2.533 - 2.547: 87.8242% ( 1141)
00:14:39.736 2.547 - 2.560: 93.4641% ( 1070)
00:14:39.736 2.560 - 2.573: 96.5054% ( 577)
00:14:39.736 2.573 - 2.587: 98.1288% ( 308)
00:14:39.736 2.587 - 2.600: 98.9880% ( 163)
00:14:39.736 2.600 - 2.613: 99.2990% ( 59)
00:14:39.736 2.613 - 2.627: 99.3938% ( 18)
00:14:39.736 2.627 - 2.640: 99.4202% ( 5)
00:14:39.736 2.640 - 2.653: 99.4255% ( 1)
00:14:39.736 4.160 - 4.187: 99.4307% ( 1)
00:14:39.736 4.347 - 4.373: 99.4360% ( 1)
00:14:39.736 4.373 - 4.400: 99.4413% ( 1)
00:14:39.736 4.507 - 4.533: 99.4466% ( 1)
00:14:39.736 4.533 - 4.560: 99.4518% ( 1)
00:14:39.736 4.560 - 4.587: 99.4624% ( 2)
00:14:39.736 4.587 - 4.613: 99.4676% ( 1)
00:14:39.736 4.667 - 4.693: 99.4729% ( 1)
00:14:39.736 4.693 - 4.720: 99.4782% ( 1)
00:14:39.736 4.773 - 4.800: 99.4834% ( 1)
00:14:39.736 4.933 - 4.960: 99.4940% ( 2)
00:14:39.736 4.960 - 4.987: 99.4993% ( 1)
00:14:39.736 5.013 - 5.040: 99.5045% ( 1)
00:14:39.737 5.067 - 5.093: 99.5098% ( 1)
00:14:39.737 5.093 - 5.120: 99.5151% ( 1)
00:14:39.737 5.253 - 5.280: 99.5203% ( 1)
00:14:39.737 5.333 - 5.360: 99.5256% ( 1)
00:14:39.737 5.520 - 5.547:
99.5309% ( 1)
00:14:39.737 5.547 - 5.573: 99.5362% ( 1)
00:14:39.737 5.600 - 5.627: 99.5414% ( 1)
00:14:39.737 5.733 - 5.760: 99.5520% ( 2)
00:14:39.737 5.813 - 5.840: 99.5572% ( 1)
00:14:39.737 5.973 - 6.000: 99.5731% ( 3)
00:14:39.737 6.053 - 6.080: 99.5783% ( 1)
00:14:39.737 6.080 - 6.107: 99.5836% ( 1)
00:14:39.737 6.827 - 6.880: 99.5889% ( 1)
00:14:39.737 9.760 - 9.813: 99.5941% ( 1)
00:14:39.737 42.880 - 43.093: 99.5994% ( 1)
00:14:39.737 153.600 - 154.453: 99.6047% ( 1)
00:14:39.737 3058.347 - 3072.000: 99.6100% ( 1)
00:14:39.737 3072.000 - 3085.653: 99.6152% ( 1)
00:14:39.737 3986.773 - 4014.080: 99.9947% ( 72)
00:14:39.737 4969.813 - 4997.120: 100.0000% ( 1)
00:14:39.737
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:39.737 [
00:14:39.737 {
00:14:39.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:39.737 "subtype": "Discovery",
00:14:39.737 "listen_addresses": [],
00:14:39.737 "allow_any_host": true,
00:14:39.737 "hosts": []
00:14:39.737 },
00:14:39.737 {
00:14:39.737 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:39.737 "subtype": "NVMe",
00:14:39.737 "listen_addresses": [
00:14:39.737 {
00:14:39.737 "trtype": "VFIOUSER",
00:14:39.737 "adrfam": "IPv4",
00:14:39.737 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:39.737 "trsvcid": "0"
00:14:39.737 }
00:14:39.737 ],
00:14:39.737 "allow_any_host": true,
00:14:39.737 "hosts": [],
00:14:39.737 "serial_number": "SPDK1",
00:14:39.737 "model_number": "SPDK bdev Controller",
00:14:39.737 "max_namespaces": 32,
00:14:39.737 "min_cntlid": 1,
00:14:39.737 "max_cntlid": 65519,
00:14:39.737 "namespaces": [
00:14:39.737 {
00:14:39.737 "nsid": 1,
00:14:39.737 "bdev_name": "Malloc1",
00:14:39.737 "name": "Malloc1",
00:14:39.737 "nguid": "1F839BC5C2514D84A6B440437DB2AB6B",
00:14:39.737 "uuid": "1f839bc5-c251-4d84-a6b4-40437db2ab6b"
00:14:39.737 }
00:14:39.737 ]
00:14:39.737 },
00:14:39.737 {
00:14:39.737 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:14:39.737 "subtype": "NVMe",
00:14:39.737 "listen_addresses": [
00:14:39.737 {
00:14:39.737 "trtype": "VFIOUSER",
00:14:39.737 "adrfam": "IPv4",
00:14:39.737 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:14:39.737 "trsvcid": "0"
00:14:39.737 }
00:14:39.737 ],
00:14:39.737 "allow_any_host": true,
00:14:39.737 "hosts": [],
00:14:39.737 "serial_number": "SPDK2",
00:14:39.737 "model_number": "SPDK bdev Controller",
00:14:39.737 "max_namespaces": 32,
00:14:39.737 "min_cntlid": 1,
00:14:39.737 "max_cntlid": 65519,
00:14:39.737 "namespaces": [
00:14:39.737 {
00:14:39.737 "nsid": 1,
00:14:39.737 "bdev_name": "Malloc2",
00:14:39.737 "name": "Malloc2",
00:14:39.737 "nguid": "2AE53EA941724D6EAF1BFF872B9AF192",
00:14:39.737 "uuid": "2ae53ea9-4172-4d6e-af1b-ff872b9af192"
00:14:39.737 }
00:14:39.737 ]
00:14:39.737 }
00:14:39.737 ]
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2158449
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 --
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:14:39.737 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:14:39.737 Malloc3
00:14:39.737 [2024-11-20 16:26:25.685410] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:14:39.997 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:14:39.997 [2024-11-20 16:26:25.865596] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:14:39.997 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:14:39.997 Asynchronous Event Request test
00:14:39.997 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:14:39.997 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:14:39.997 Registering asynchronous event callbacks...
00:14:39.997 Starting namespace attribute notice tests for all controllers...
00:14:39.997 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:14:39.997 aer_cb - Changed Namespace
00:14:39.997 Cleaning up...
00:14:40.258 [
00:14:40.258 {
00:14:40.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:14:40.258 "subtype": "Discovery",
00:14:40.258 "listen_addresses": [],
00:14:40.258 "allow_any_host": true,
00:14:40.258 "hosts": []
00:14:40.258 },
00:14:40.258 {
00:14:40.258 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:14:40.258 "subtype": "NVMe",
00:14:40.258 "listen_addresses": [
00:14:40.258 {
00:14:40.258 "trtype": "VFIOUSER",
00:14:40.258 "adrfam": "IPv4",
00:14:40.258 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:14:40.258 "trsvcid": "0"
00:14:40.258 }
00:14:40.258 ],
00:14:40.258 "allow_any_host": true,
00:14:40.258 "hosts": [],
00:14:40.258 "serial_number": "SPDK1",
00:14:40.258 "model_number": "SPDK bdev Controller",
00:14:40.258 "max_namespaces": 32,
00:14:40.258 "min_cntlid": 1,
00:14:40.258 "max_cntlid": 65519,
00:14:40.258 "namespaces": [
00:14:40.258 {
00:14:40.258 "nsid": 1,
00:14:40.258 "bdev_name": "Malloc1",
00:14:40.258 "name": "Malloc1",
00:14:40.258 "nguid": "1F839BC5C2514D84A6B440437DB2AB6B",
00:14:40.258 "uuid": "1f839bc5-c251-4d84-a6b4-40437db2ab6b"
00:14:40.258 },
00:14:40.258 {
00:14:40.258 "nsid": 2,
00:14:40.258 "bdev_name": "Malloc3",
00:14:40.258 "name": "Malloc3",
00:14:40.258 "nguid": "C4E13A1B404640728F366711EEB3FC30",
00:14:40.258 "uuid": "c4e13a1b-4046-4072-8f36-6711eeb3fc30"
00:14:40.258 }
00:14:40.258 ]
00:14:40.258 },
00:14:40.258 {
00:14:40.258 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:14:40.258 "subtype": "NVMe",
00:14:40.258 "listen_addresses": [
00:14:40.258 {
00:14:40.258 "trtype": "VFIOUSER",
00:14:40.258 "adrfam": "IPv4",
00:14:40.258 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:14:40.258 "trsvcid": "0"
00:14:40.258 }
00:14:40.258 ],
00:14:40.258 "allow_any_host": true,
00:14:40.258 "hosts": [],
00:14:40.258 "serial_number": "SPDK2",
00:14:40.258 "model_number": "SPDK bdev Controller",
00:14:40.258 "max_namespaces": 32,
00:14:40.258 "min_cntlid": 1,
00:14:40.258 "max_cntlid": 65519,
00:14:40.258 "namespaces": [
00:14:40.258 {
00:14:40.258 "nsid": 1,
00:14:40.258 "bdev_name": "Malloc2",
00:14:40.258 "name": "Malloc2",
00:14:40.258 "nguid": "2AE53EA941724D6EAF1BFF872B9AF192",
00:14:40.258 "uuid": "2ae53ea9-4172-4d6e-af1b-ff872b9af192"
00:14:40.258 }
00:14:40.258 ]
00:14:40.258 }
00:14:40.258 ]
00:14:40.258 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2158449
00:14:40.258 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:14:40.258 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:14:40.258 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:14:40.258 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:14:40.258 [2024-11-20 16:26:26.105892] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:14:40.258 [2024-11-20 16:26:26.105931] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158550 ]
00:14:40.258 [2024-11-20 16:26:26.158996] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:14:40.258 [2024-11-20 16:26:26.168220] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:14:40.258 [2024-11-20 16:26:26.168245] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5f8b092000
00:14:40.258 [2024-11-20 16:26:26.169213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:40.258 [2024-11-20 16:26:26.170216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:40.258 [2024-11-20 16:26:26.171226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:40.258 [2024-11-20 16:26:26.172231] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:14:40.258 [2024-11-20 16:26:26.173234] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:14:40.258 [2024-11-20 16:26:26.174237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:40.258 [2024-11-20 16:26:26.175245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:14:40.258 [2024-11-20 16:26:26.176254] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:14:40.258 [2024-11-20 16:26:26.177266] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:14:40.258 [2024-11-20 16:26:26.177277] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5f8b087000
00:14:40.258 [2024-11-20 16:26:26.178601] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:40.258 [2024-11-20 16:26:26.194808] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:14:40.258 [2024-11-20 16:26:26.194831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout)
00:14:40.258 [2024-11-20 16:26:26.199911] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:14:40.258 [2024-11-20 16:26:26.199956] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:14:40.258 [2024-11-20 16:26:26.200040] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout)
00:14:40.258 [2024-11-20 16:26:26.200053] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout)
00:14:40.258 [2024-11-20 16:26:26.200058] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout)
00:14:40.258 [2024-11-20 16:26:26.200912] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*:
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:14:40.258 [2024-11-20 16:26:26.200921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout)
00:14:40.258 [2024-11-20 16:26:26.200928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout)
00:14:40.259 [2024-11-20 16:26:26.201913] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:14:40.259 [2024-11-20 16:26:26.201922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout)
00:14:40.259 [2024-11-20 16:26:26.201930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms)
00:14:40.259 [2024-11-20 16:26:26.202920] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:14:40.259 [2024-11-20 16:26:26.202932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:14:40.259 [2024-11-20 16:26:26.203928] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:14:40.259 [2024-11-20 16:26:26.203937] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0
00:14:40.259 [2024-11-20 16:26:26.203943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms)
00:14:40.259 [2024-11-20 16:26:26.203950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:14:40.259 [2024-11-20 16:26:26.204058] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1
00:14:40.259 [2024-11-20 16:26:26.204063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:14:40.259 [2024-11-20 16:26:26.204068] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:14:40.259 [2024-11-20 16:26:26.204938] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:14:40.259 [2024-11-20 16:26:26.205943] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:14:40.259 [2024-11-20 16:26:26.206948] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:14:40.259 [2024-11-20 16:26:26.207949] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:14:40.259 [2024-11-20 16:26:26.207992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:14:40.259 [2024-11-20 16:26:26.208965] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:14:40.259 [2024-11-20 16:26:26.208973] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:14:40.259 [2024-11-20 16:26:26.208978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state:
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms)
00:14:40.259 [2024-11-20 16:26:26.209003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout)
00:14:40.259 [2024-11-20 16:26:26.209010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms)
00:14:40.259 [2024-11-20 16:26:26.209022] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:40.259 [2024-11-20 16:26:26.209028] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:40.259 [2024-11-20 16:26:26.209032] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:40.259 [2024-11-20 16:26:26.209043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:40.259 [2024-11-20 16:26:26.212990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:14:40.259 [2024-11-20 16:26:26.213001] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072
00:14:40.259 [2024-11-20 16:26:26.213006] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072
00:14:40.259 [2024-11-20 16:26:26.213013] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001
00:14:40.259 [2024-11-20 16:26:26.213018] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:14:40.259 [2024-11-20 16:26:26.213025]
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1
00:14:40.259 [2024-11-20 16:26:26.213030] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1
00:14:40.259 [2024-11-20 16:26:26.213034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms)
00:14:40.259 [2024-11-20 16:26:26.213044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms)
00:14:40.259 [2024-11-20 16:26:26.213053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:14:40.521 [2024-11-20 16:26:26.220988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:14:40.521 [2024-11-20 16:26:26.221001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:14:40.521 [2024-11-20 16:26:26.221010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:14:40.521 [2024-11-20 16:26:26.221018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:14:40.521 [2024-11-20 16:26:26.221027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:14:40.521 [2024-11-20 16:26:26.221032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.221039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state:
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.221048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:14:40.521 [2024-11-20 16:26:26.228991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:14:40.521 [2024-11-20 16:26:26.229001] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms
00:14:40.521 [2024-11-20 16:26:26.229006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.229013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.229018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.229027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:14:40.521 [2024-11-20 16:26:26.236987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:14:40.521 [2024-11-20 16:26:26.237051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.237059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.237067] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:14:40.521 [2024-11-20 16:26:26.237074] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:14:40.521 [2024-11-20 16:26:26.237078] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:40.521 [2024-11-20 16:26:26.237084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:14:40.521 [2024-11-20 16:26:26.244987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:14:40.521 [2024-11-20 16:26:26.244998] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added
00:14:40.521 [2024-11-20 16:26:26.245012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.245020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.245027] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:40.521 [2024-11-20 16:26:26.245032] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:40.521 [2024-11-20 16:26:26.245035] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:40.521 [2024-11-20 16:26:26.245041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:40.521 [2024-11-20 16:26:26.252987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:14:40.521 [2024-11-20 16:26:26.253000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.253008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.253015] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:14:40.521 [2024-11-20 16:26:26.253020] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:14:40.521 [2024-11-20 16:26:26.253023] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:14:40.521 [2024-11-20 16:26:26.253029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:14:40.521 [2024-11-20 16:26:26.260987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:14:40.521 [2024-11-20 16:26:26.260996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.261003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.261012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms)
00:14:40.521 [2024-11-20 16:26:26.261017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior
support feature (timeout 30000 ms) 00:14:40.521 [2024-11-20 16:26:26.261022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:40.521 [2024-11-20 16:26:26.261027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:40.521 [2024-11-20 16:26:26.261032] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:40.521 [2024-11-20 16:26:26.261039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:40.522 [2024-11-20 16:26:26.261044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:40.522 [2024-11-20 16:26:26.261059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:40.522 [2024-11-20 16:26:26.268989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:40.522 [2024-11-20 16:26:26.269002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:40.522 [2024-11-20 16:26:26.276989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:40.522 [2024-11-20 16:26:26.277002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:40.522 [2024-11-20 16:26:26.284989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:40.522 [2024-11-20 
16:26:26.285002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:40.522 [2024-11-20 16:26:26.292986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:40.522 [2024-11-20 16:26:26.293002] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:40.522 [2024-11-20 16:26:26.293006] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:40.522 [2024-11-20 16:26:26.293010] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:40.522 [2024-11-20 16:26:26.293014] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:40.522 [2024-11-20 16:26:26.293017] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:40.522 [2024-11-20 16:26:26.293023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:40.522 [2024-11-20 16:26:26.293031] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:40.522 [2024-11-20 16:26:26.293035] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:40.522 [2024-11-20 16:26:26.293039] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.522 [2024-11-20 16:26:26.293045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:40.522 [2024-11-20 16:26:26.293052] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:40.522 [2024-11-20 16:26:26.293057] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:40.522 [2024-11-20 16:26:26.293060] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.522 [2024-11-20 16:26:26.293066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:40.522 [2024-11-20 16:26:26.293074] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:40.522 [2024-11-20 16:26:26.293078] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:40.522 [2024-11-20 16:26:26.293081] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:40.522 [2024-11-20 16:26:26.293087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:40.522 [2024-11-20 16:26:26.300988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:40.522 [2024-11-20 16:26:26.301003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:40.522 [2024-11-20 16:26:26.301014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:40.522 [2024-11-20 16:26:26.301021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:40.522 ===================================================== 00:14:40.522 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:40.522 ===================================================== 00:14:40.522 Controller Capabilities/Features 00:14:40.522 
================================ 00:14:40.522 Vendor ID: 4e58 00:14:40.522 Subsystem Vendor ID: 4e58 00:14:40.522 Serial Number: SPDK2 00:14:40.522 Model Number: SPDK bdev Controller 00:14:40.522 Firmware Version: 25.01 00:14:40.522 Recommended Arb Burst: 6 00:14:40.522 IEEE OUI Identifier: 8d 6b 50 00:14:40.522 Multi-path I/O 00:14:40.522 May have multiple subsystem ports: Yes 00:14:40.522 May have multiple controllers: Yes 00:14:40.522 Associated with SR-IOV VF: No 00:14:40.522 Max Data Transfer Size: 131072 00:14:40.522 Max Number of Namespaces: 32 00:14:40.522 Max Number of I/O Queues: 127 00:14:40.522 NVMe Specification Version (VS): 1.3 00:14:40.522 NVMe Specification Version (Identify): 1.3 00:14:40.522 Maximum Queue Entries: 256 00:14:40.522 Contiguous Queues Required: Yes 00:14:40.522 Arbitration Mechanisms Supported 00:14:40.522 Weighted Round Robin: Not Supported 00:14:40.522 Vendor Specific: Not Supported 00:14:40.522 Reset Timeout: 15000 ms 00:14:40.522 Doorbell Stride: 4 bytes 00:14:40.522 NVM Subsystem Reset: Not Supported 00:14:40.522 Command Sets Supported 00:14:40.522 NVM Command Set: Supported 00:14:40.522 Boot Partition: Not Supported 00:14:40.522 Memory Page Size Minimum: 4096 bytes 00:14:40.522 Memory Page Size Maximum: 4096 bytes 00:14:40.522 Persistent Memory Region: Not Supported 00:14:40.522 Optional Asynchronous Events Supported 00:14:40.522 Namespace Attribute Notices: Supported 00:14:40.522 Firmware Activation Notices: Not Supported 00:14:40.522 ANA Change Notices: Not Supported 00:14:40.522 PLE Aggregate Log Change Notices: Not Supported 00:14:40.522 LBA Status Info Alert Notices: Not Supported 00:14:40.522 EGE Aggregate Log Change Notices: Not Supported 00:14:40.522 Normal NVM Subsystem Shutdown event: Not Supported 00:14:40.522 Zone Descriptor Change Notices: Not Supported 00:14:40.522 Discovery Log Change Notices: Not Supported 00:14:40.522 Controller Attributes 00:14:40.522 128-bit Host Identifier: Supported 00:14:40.522 
Non-Operational Permissive Mode: Not Supported 00:14:40.522 NVM Sets: Not Supported 00:14:40.522 Read Recovery Levels: Not Supported 00:14:40.522 Endurance Groups: Not Supported 00:14:40.522 Predictable Latency Mode: Not Supported 00:14:40.522 Traffic Based Keep ALive: Not Supported 00:14:40.522 Namespace Granularity: Not Supported 00:14:40.522 SQ Associations: Not Supported 00:14:40.522 UUID List: Not Supported 00:14:40.522 Multi-Domain Subsystem: Not Supported 00:14:40.522 Fixed Capacity Management: Not Supported 00:14:40.522 Variable Capacity Management: Not Supported 00:14:40.522 Delete Endurance Group: Not Supported 00:14:40.522 Delete NVM Set: Not Supported 00:14:40.522 Extended LBA Formats Supported: Not Supported 00:14:40.522 Flexible Data Placement Supported: Not Supported 00:14:40.522 00:14:40.522 Controller Memory Buffer Support 00:14:40.522 ================================ 00:14:40.522 Supported: No 00:14:40.522 00:14:40.522 Persistent Memory Region Support 00:14:40.522 ================================ 00:14:40.522 Supported: No 00:14:40.522 00:14:40.522 Admin Command Set Attributes 00:14:40.522 ============================ 00:14:40.522 Security Send/Receive: Not Supported 00:14:40.522 Format NVM: Not Supported 00:14:40.522 Firmware Activate/Download: Not Supported 00:14:40.522 Namespace Management: Not Supported 00:14:40.522 Device Self-Test: Not Supported 00:14:40.522 Directives: Not Supported 00:14:40.522 NVMe-MI: Not Supported 00:14:40.522 Virtualization Management: Not Supported 00:14:40.522 Doorbell Buffer Config: Not Supported 00:14:40.522 Get LBA Status Capability: Not Supported 00:14:40.522 Command & Feature Lockdown Capability: Not Supported 00:14:40.522 Abort Command Limit: 4 00:14:40.522 Async Event Request Limit: 4 00:14:40.522 Number of Firmware Slots: N/A 00:14:40.522 Firmware Slot 1 Read-Only: N/A 00:14:40.522 Firmware Activation Without Reset: N/A 00:14:40.522 Multiple Update Detection Support: N/A 00:14:40.522 Firmware Update 
Granularity: No Information Provided 00:14:40.522 Per-Namespace SMART Log: No 00:14:40.522 Asymmetric Namespace Access Log Page: Not Supported 00:14:40.522 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:40.522 Command Effects Log Page: Supported 00:14:40.522 Get Log Page Extended Data: Supported 00:14:40.522 Telemetry Log Pages: Not Supported 00:14:40.522 Persistent Event Log Pages: Not Supported 00:14:40.522 Supported Log Pages Log Page: May Support 00:14:40.522 Commands Supported & Effects Log Page: Not Supported 00:14:40.522 Feature Identifiers & Effects Log Page:May Support 00:14:40.522 NVMe-MI Commands & Effects Log Page: May Support 00:14:40.522 Data Area 4 for Telemetry Log: Not Supported 00:14:40.522 Error Log Page Entries Supported: 128 00:14:40.522 Keep Alive: Supported 00:14:40.522 Keep Alive Granularity: 10000 ms 00:14:40.522 00:14:40.522 NVM Command Set Attributes 00:14:40.522 ========================== 00:14:40.522 Submission Queue Entry Size 00:14:40.522 Max: 64 00:14:40.522 Min: 64 00:14:40.522 Completion Queue Entry Size 00:14:40.522 Max: 16 00:14:40.522 Min: 16 00:14:40.522 Number of Namespaces: 32 00:14:40.522 Compare Command: Supported 00:14:40.522 Write Uncorrectable Command: Not Supported 00:14:40.522 Dataset Management Command: Supported 00:14:40.522 Write Zeroes Command: Supported 00:14:40.522 Set Features Save Field: Not Supported 00:14:40.522 Reservations: Not Supported 00:14:40.523 Timestamp: Not Supported 00:14:40.523 Copy: Supported 00:14:40.523 Volatile Write Cache: Present 00:14:40.523 Atomic Write Unit (Normal): 1 00:14:40.523 Atomic Write Unit (PFail): 1 00:14:40.523 Atomic Compare & Write Unit: 1 00:14:40.523 Fused Compare & Write: Supported 00:14:40.523 Scatter-Gather List 00:14:40.523 SGL Command Set: Supported (Dword aligned) 00:14:40.523 SGL Keyed: Not Supported 00:14:40.523 SGL Bit Bucket Descriptor: Not Supported 00:14:40.523 SGL Metadata Pointer: Not Supported 00:14:40.523 Oversized SGL: Not Supported 00:14:40.523 SGL 
Metadata Address: Not Supported 00:14:40.523 SGL Offset: Not Supported 00:14:40.523 Transport SGL Data Block: Not Supported 00:14:40.523 Replay Protected Memory Block: Not Supported 00:14:40.523 00:14:40.523 Firmware Slot Information 00:14:40.523 ========================= 00:14:40.523 Active slot: 1 00:14:40.523 Slot 1 Firmware Revision: 25.01 00:14:40.523 00:14:40.523 00:14:40.523 Commands Supported and Effects 00:14:40.523 ============================== 00:14:40.523 Admin Commands 00:14:40.523 -------------- 00:14:40.523 Get Log Page (02h): Supported 00:14:40.523 Identify (06h): Supported 00:14:40.523 Abort (08h): Supported 00:14:40.523 Set Features (09h): Supported 00:14:40.523 Get Features (0Ah): Supported 00:14:40.523 Asynchronous Event Request (0Ch): Supported 00:14:40.523 Keep Alive (18h): Supported 00:14:40.523 I/O Commands 00:14:40.523 ------------ 00:14:40.523 Flush (00h): Supported LBA-Change 00:14:40.523 Write (01h): Supported LBA-Change 00:14:40.523 Read (02h): Supported 00:14:40.523 Compare (05h): Supported 00:14:40.523 Write Zeroes (08h): Supported LBA-Change 00:14:40.523 Dataset Management (09h): Supported LBA-Change 00:14:40.523 Copy (19h): Supported LBA-Change 00:14:40.523 00:14:40.523 Error Log 00:14:40.523 ========= 00:14:40.523 00:14:40.523 Arbitration 00:14:40.523 =========== 00:14:40.523 Arbitration Burst: 1 00:14:40.523 00:14:40.523 Power Management 00:14:40.523 ================ 00:14:40.523 Number of Power States: 1 00:14:40.523 Current Power State: Power State #0 00:14:40.523 Power State #0: 00:14:40.523 Max Power: 0.00 W 00:14:40.523 Non-Operational State: Operational 00:14:40.523 Entry Latency: Not Reported 00:14:40.523 Exit Latency: Not Reported 00:14:40.523 Relative Read Throughput: 0 00:14:40.523 Relative Read Latency: 0 00:14:40.523 Relative Write Throughput: 0 00:14:40.523 Relative Write Latency: 0 00:14:40.523 Idle Power: Not Reported 00:14:40.523 Active Power: Not Reported 00:14:40.523 Non-Operational Permissive Mode: Not 
Supported 00:14:40.523 00:14:40.523 Health Information 00:14:40.523 ================== 00:14:40.523 Critical Warnings: 00:14:40.523 Available Spare Space: OK 00:14:40.523 Temperature: OK 00:14:40.523 Device Reliability: OK 00:14:40.523 Read Only: No 00:14:40.523 Volatile Memory Backup: OK 00:14:40.523 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:40.523 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:40.523 Available Spare: 0% 00:14:40.523 Available Sp[2024-11-20 16:26:26.301123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:40.523 [2024-11-20 16:26:26.308987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:40.523 [2024-11-20 16:26:26.309016] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:40.523 [2024-11-20 16:26:26.309026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.523 [2024-11-20 16:26:26.309032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.523 [2024-11-20 16:26:26.309039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.523 [2024-11-20 16:26:26.309045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:40.523 [2024-11-20 16:26:26.312988] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:40.523 [2024-11-20 16:26:26.312999] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:40.523 
[2024-11-20 16:26:26.313106] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:40.523 [2024-11-20 16:26:26.313154] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:40.523 [2024-11-20 16:26:26.313161] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:40.523 [2024-11-20 16:26:26.314116] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:40.523 [2024-11-20 16:26:26.314129] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:40.523 [2024-11-20 16:26:26.314176] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:40.523 [2024-11-20 16:26:26.315555] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:40.523 are Threshold: 0% 00:14:40.523 Life Percentage Used: 0% 00:14:40.523 Data Units Read: 0 00:14:40.523 Data Units Written: 0 00:14:40.523 Host Read Commands: 0 00:14:40.523 Host Write Commands: 0 00:14:40.523 Controller Busy Time: 0 minutes 00:14:40.523 Power Cycles: 0 00:14:40.523 Power On Hours: 0 hours 00:14:40.523 Unsafe Shutdowns: 0 00:14:40.523 Unrecoverable Media Errors: 0 00:14:40.523 Lifetime Error Log Entries: 0 00:14:40.523 Warning Temperature Time: 0 minutes 00:14:40.523 Critical Temperature Time: 0 minutes 00:14:40.523 00:14:40.523 Number of Queues 00:14:40.523 ================ 00:14:40.523 Number of I/O Submission Queues: 127 00:14:40.523 Number of I/O Completion Queues: 127 00:14:40.523 00:14:40.523 Active Namespaces 00:14:40.523 ================= 00:14:40.523 Namespace ID:1 00:14:40.523 Error Recovery Timeout: Unlimited 
00:14:40.523 Command Set Identifier: NVM (00h) 00:14:40.523 Deallocate: Supported 00:14:40.523 Deallocated/Unwritten Error: Not Supported 00:14:40.523 Deallocated Read Value: Unknown 00:14:40.523 Deallocate in Write Zeroes: Not Supported 00:14:40.523 Deallocated Guard Field: 0xFFFF 00:14:40.523 Flush: Supported 00:14:40.523 Reservation: Supported 00:14:40.523 Namespace Sharing Capabilities: Multiple Controllers 00:14:40.523 Size (in LBAs): 131072 (0GiB) 00:14:40.523 Capacity (in LBAs): 131072 (0GiB) 00:14:40.523 Utilization (in LBAs): 131072 (0GiB) 00:14:40.523 NGUID: 2AE53EA941724D6EAF1BFF872B9AF192 00:14:40.523 UUID: 2ae53ea9-4172-4d6e-af1b-ff872b9af192 00:14:40.523 Thin Provisioning: Not Supported 00:14:40.523 Per-NS Atomic Units: Yes 00:14:40.523 Atomic Boundary Size (Normal): 0 00:14:40.523 Atomic Boundary Size (PFail): 0 00:14:40.523 Atomic Boundary Offset: 0 00:14:40.523 Maximum Single Source Range Length: 65535 00:14:40.523 Maximum Copy Length: 65535 00:14:40.523 Maximum Source Range Count: 1 00:14:40.523 NGUID/EUI64 Never Reused: No 00:14:40.523 Namespace Write Protected: No 00:14:40.523 Number of LBA Formats: 1 00:14:40.523 Current LBA Format: LBA Format #00 00:14:40.523 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:40.523 00:14:40.523 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:40.784 [2024-11-20 16:26:26.517085] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:46.085 Initializing NVMe Controllers 00:14:46.085 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:46.085 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:46.085 Initialization complete. Launching workers. 00:14:46.085 ======================================================== 00:14:46.085 Latency(us) 00:14:46.085 Device Information : IOPS MiB/s Average min max 00:14:46.085 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40005.10 156.27 3199.46 843.21 9783.59 00:14:46.085 ======================================================== 00:14:46.085 Total : 40005.10 156.27 3199.46 843.21 9783.59 00:14:46.085 00:14:46.085 [2024-11-20 16:26:31.622178] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:46.085 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:46.085 [2024-11-20 16:26:31.812779] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:51.372 Initializing NVMe Controllers 00:14:51.372 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:51.372 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:51.372 Initialization complete. Launching workers. 
00:14:51.372 ======================================================== 00:14:51.372 Latency(us) 00:14:51.372 Device Information : IOPS MiB/s Average min max 00:14:51.372 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34968.40 136.60 3661.68 1107.68 7676.24 00:14:51.372 ======================================================== 00:14:51.372 Total : 34968.40 136.60 3661.68 1107.68 7676.24 00:14:51.372 00:14:51.372 [2024-11-20 16:26:36.837587] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:51.372 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:51.372 [2024-11-20 16:26:37.036372] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.661 [2024-11-20 16:26:42.182069] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.661 Initializing NVMe Controllers 00:14:56.661 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:56.661 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:56.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:56.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:56.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:56.661 Initialization complete. Launching workers. 
00:14:56.661 Starting thread on core 2 00:14:56.661 Starting thread on core 3 00:14:56.661 Starting thread on core 1 00:14:56.661 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:56.661 [2024-11-20 16:26:42.465442] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:59.963 [2024-11-20 16:26:45.633120] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:59.963 Initializing NVMe Controllers 00:14:59.963 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.963 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:59.963 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:59.963 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:59.963 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:59.963 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:59.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:59.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:59.963 Initialization complete. Launching workers. 
00:14:59.963 Starting thread on core 1 with urgent priority queue 00:14:59.963 Starting thread on core 2 with urgent priority queue 00:14:59.963 Starting thread on core 3 with urgent priority queue 00:14:59.963 Starting thread on core 0 with urgent priority queue 00:14:59.963 SPDK bdev Controller (SPDK2 ) core 0: 4192.33 IO/s 23.85 secs/100000 ios 00:14:59.963 SPDK bdev Controller (SPDK2 ) core 1: 3371.33 IO/s 29.66 secs/100000 ios 00:14:59.963 SPDK bdev Controller (SPDK2 ) core 2: 3209.67 IO/s 31.16 secs/100000 ios 00:14:59.963 SPDK bdev Controller (SPDK2 ) core 3: 3374.00 IO/s 29.64 secs/100000 ios 00:14:59.963 ======================================================== 00:14:59.963 00:14:59.963 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:00.225 [2024-11-20 16:26:45.919390] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:00.225 Initializing NVMe Controllers 00:15:00.225 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:00.225 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:00.225 Namespace ID: 1 size: 0GB 00:15:00.225 Initialization complete. 00:15:00.225 INFO: using host memory buffer for IO 00:15:00.225 Hello world! 
00:15:00.225 [2024-11-20 16:26:45.928449] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:00.225 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:00.486 [2024-11-20 16:26:46.218322] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.429 Initializing NVMe Controllers 00:15:01.429 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.429 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:01.429 Initialization complete. Launching workers. 00:15:01.429 submit (in ns) avg, min, max = 7621.5, 3897.5, 3999750.8 00:15:01.429 complete (in ns) avg, min, max = 16967.4, 2384.2, 4004554.2 00:15:01.429 00:15:01.429 Submit histogram 00:15:01.429 ================ 00:15:01.429 Range in us Cumulative Count 00:15:01.429 3.893 - 3.920: 1.5176% ( 289) 00:15:01.429 3.920 - 3.947: 7.4253% ( 1125) 00:15:01.429 3.947 - 3.973: 16.7463% ( 1775) 00:15:01.429 3.973 - 4.000: 25.9150% ( 1746) 00:15:01.429 4.000 - 4.027: 36.4176% ( 2000) 00:15:01.429 4.027 - 4.053: 47.8969% ( 2186) 00:15:01.429 4.053 - 4.080: 63.0730% ( 2890) 00:15:01.429 4.080 - 4.107: 79.5043% ( 3129) 00:15:01.429 4.107 - 4.133: 90.6527% ( 2123) 00:15:01.429 4.133 - 4.160: 96.7127% ( 1154) 00:15:01.429 4.160 - 4.187: 98.7502% ( 388) 00:15:01.429 4.187 - 4.213: 99.3541% ( 115) 00:15:01.429 4.213 - 4.240: 99.5326% ( 34) 00:15:01.429 4.240 - 4.267: 99.5851% ( 10) 00:15:01.429 4.267 - 4.293: 99.6009% ( 3) 00:15:01.429 4.293 - 4.320: 99.6062% ( 1) 00:15:01.429 4.720 - 4.747: 99.6114% ( 1) 00:15:01.429 4.987 - 5.013: 99.6167% ( 1) 00:15:01.429 5.387 - 5.413: 99.6219% ( 1) 00:15:01.429 5.653 - 5.680: 99.6272% ( 1) 00:15:01.429 5.840 - 5.867: 99.6377% ( 2) 
00:15:01.429 5.893 - 5.920: 99.6429% ( 1) 00:15:01.429 5.920 - 5.947: 99.6482% ( 1) 00:15:01.429 5.947 - 5.973: 99.6534% ( 1) 00:15:01.429 5.973 - 6.000: 99.6587% ( 1) 00:15:01.429 6.000 - 6.027: 99.6692% ( 2) 00:15:01.429 6.027 - 6.053: 99.6797% ( 2) 00:15:01.429 6.107 - 6.133: 99.6849% ( 1) 00:15:01.429 6.133 - 6.160: 99.6954% ( 2) 00:15:01.429 6.160 - 6.187: 99.7112% ( 3) 00:15:01.429 6.187 - 6.213: 99.7217% ( 2) 00:15:01.429 6.213 - 6.240: 99.7269% ( 1) 00:15:01.429 6.240 - 6.267: 99.7374% ( 2) 00:15:01.429 6.267 - 6.293: 99.7479% ( 2) 00:15:01.429 6.293 - 6.320: 99.7532% ( 1) 00:15:01.429 6.320 - 6.347: 99.7742% ( 4) 00:15:01.429 6.347 - 6.373: 99.7847% ( 2) 00:15:01.429 6.427 - 6.453: 99.8005% ( 3) 00:15:01.429 6.480 - 6.507: 99.8057% ( 1) 00:15:01.429 6.613 - 6.640: 99.8110% ( 1) 00:15:01.429 6.667 - 6.693: 99.8162% ( 1) 00:15:01.429 6.693 - 6.720: 99.8215% ( 1) 00:15:01.429 6.720 - 6.747: 99.8267% ( 1) 00:15:01.429 6.747 - 6.773: 99.8372% ( 2) 00:15:01.429 6.827 - 6.880: 99.8425% ( 1) 00:15:01.429 6.933 - 6.987: 99.8530% ( 2) 00:15:01.429 6.987 - 7.040: 99.8582% ( 1) 00:15:01.429 7.040 - 7.093: 99.8687% ( 2) 00:15:01.429 7.093 - 7.147: 99.8740% ( 1) 00:15:01.429 7.200 - 7.253: 99.8792% ( 1) 00:15:01.429 7.413 - 7.467: 99.8845% ( 1) 00:15:01.429 7.520 - 7.573: 99.8950% ( 2) 00:15:01.429 8.000 - 8.053: 99.9002% ( 1) 00:15:01.429 9.120 - 9.173: 99.9055% ( 1) 00:15:01.429 9.387 - 9.440: 99.9107% ( 1) 00:15:01.429 3986.773 - 4014.080: 100.0000% ( 17) 00:15:01.429 00:15:01.429 Complete histogram 00:15:01.429 ================== 00:15:01.429 Range in us Cumulative Count 00:15:01.429 2.373 - 2.387: 0.0053% ( 1) 00:15:01.429 2.387 - 2.400: 0.3466% ( 65) 00:15:01.429 2.400 - 2.413: 0.9872% ( 122) 00:15:01.429 2.413 - 2.427: 1.0923% ( 20) 00:15:01.429 2.427 - 2.440: 1.2708% ( 34) 00:15:01.429 2.440 - 2.453: 1.3286% ( 11) 00:15:01.429 2.453 - 2.467: 40.7709% ( 7511) 00:15:01.429 2.467 - 2.480: 57.0603% ( 3102) 00:15:01.429 2.480 - 2.493: 68.6762% ( 2212) 00:15:01.429 
2.493 - 2.507: 75.2718% ( 1256) 00:15:01.429 2.507 - 2.520: 79.8929% ( 880) 00:15:01.429 2.520 - 2.533: 82.8598% ( 565) 00:15:01.429 2.533 - 2.547: 87.9746% ( 974) 00:15:01.429 2.547 - 2.560: 93.7090% ( 1092) 00:15:01.429 2.560 - 2.573: 96.2821% ( 490) 00:15:01.429 2.573 - 2.587: 98.0465% ( 336) 00:15:01.429 2.587 - 2.600: 98.9655% ( 175) 00:15:01.429 2.600 - 2.613: 99.3383% ( 71) 00:15:01.429 2.613 - 2.627: 99.4014% ( 12) 00:15:01.429 2.627 - 2.640: 99.4171% ( 3) 00:15:01.429 4.187 - 4.213: 99.4224% ( 1) 00:15:01.429 4.213 - [2024-11-20 16:26:47.312694] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.429 4.240: 99.4276% ( 1) 00:15:01.429 4.240 - 4.267: 99.4329% ( 1) 00:15:01.429 4.267 - 4.293: 99.4381% ( 1) 00:15:01.429 4.320 - 4.347: 99.4434% ( 1) 00:15:01.429 4.373 - 4.400: 99.4539% ( 2) 00:15:01.429 4.427 - 4.453: 99.4644% ( 2) 00:15:01.429 4.480 - 4.507: 99.4696% ( 1) 00:15:01.429 4.507 - 4.533: 99.4749% ( 1) 00:15:01.429 4.587 - 4.613: 99.4854% ( 2) 00:15:01.429 4.613 - 4.640: 99.4906% ( 1) 00:15:01.429 4.640 - 4.667: 99.4959% ( 1) 00:15:01.429 4.667 - 4.693: 99.5011% ( 1) 00:15:01.429 4.747 - 4.773: 99.5116% ( 2) 00:15:01.429 4.773 - 4.800: 99.5169% ( 1) 00:15:01.429 4.800 - 4.827: 99.5221% ( 1) 00:15:01.429 4.853 - 4.880: 99.5326% ( 2) 00:15:01.429 4.933 - 4.960: 99.5379% ( 1) 00:15:01.429 4.960 - 4.987: 99.5431% ( 1) 00:15:01.429 5.013 - 5.040: 99.5484% ( 1) 00:15:01.429 5.093 - 5.120: 99.5536% ( 1) 00:15:01.429 5.120 - 5.147: 99.5589% ( 1) 00:15:01.429 5.147 - 5.173: 99.5641% ( 1) 00:15:01.429 5.200 - 5.227: 99.5694% ( 1) 00:15:01.429 5.227 - 5.253: 99.5746% ( 1) 00:15:01.429 5.253 - 5.280: 99.5799% ( 1) 00:15:01.429 5.547 - 5.573: 99.5904% ( 2) 00:15:01.429 5.627 - 5.653: 99.5957% ( 1) 00:15:01.429 5.653 - 5.680: 99.6009% ( 1) 00:15:01.429 5.787 - 5.813: 99.6062% ( 1) 00:15:01.429 5.893 - 5.920: 99.6114% ( 1) 00:15:01.429 6.453 - 6.480: 99.6167% ( 1) 00:15:01.429 7.040 - 7.093: 99.6219% ( 1) 
00:15:01.429 9.653 - 9.707: 99.6272% ( 1) 00:15:01.429 10.187 - 10.240: 99.6324% ( 1) 00:15:01.429 10.933 - 10.987: 99.6377% ( 1) 00:15:01.429 3986.773 - 4014.080: 100.0000% ( 69) 00:15:01.429 00:15:01.429 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:01.429 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:01.429 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:01.429 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:01.429 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.691 [ 00:15:01.691 { 00:15:01.691 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.691 "subtype": "Discovery", 00:15:01.691 "listen_addresses": [], 00:15:01.691 "allow_any_host": true, 00:15:01.691 "hosts": [] 00:15:01.691 }, 00:15:01.691 { 00:15:01.691 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.691 "subtype": "NVMe", 00:15:01.691 "listen_addresses": [ 00:15:01.691 { 00:15:01.691 "trtype": "VFIOUSER", 00:15:01.691 "adrfam": "IPv4", 00:15:01.691 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.691 "trsvcid": "0" 00:15:01.691 } 00:15:01.691 ], 00:15:01.691 "allow_any_host": true, 00:15:01.691 "hosts": [], 00:15:01.691 "serial_number": "SPDK1", 00:15:01.691 "model_number": "SPDK bdev Controller", 00:15:01.691 "max_namespaces": 32, 00:15:01.691 "min_cntlid": 1, 00:15:01.691 "max_cntlid": 65519, 00:15:01.691 "namespaces": [ 00:15:01.691 { 00:15:01.691 "nsid": 1, 00:15:01.691 "bdev_name": "Malloc1", 00:15:01.691 "name": "Malloc1", 00:15:01.691 "nguid": 
"1F839BC5C2514D84A6B440437DB2AB6B", 00:15:01.691 "uuid": "1f839bc5-c251-4d84-a6b4-40437db2ab6b" 00:15:01.691 }, 00:15:01.691 { 00:15:01.691 "nsid": 2, 00:15:01.691 "bdev_name": "Malloc3", 00:15:01.691 "name": "Malloc3", 00:15:01.691 "nguid": "C4E13A1B404640728F366711EEB3FC30", 00:15:01.691 "uuid": "c4e13a1b-4046-4072-8f36-6711eeb3fc30" 00:15:01.691 } 00:15:01.691 ] 00:15:01.691 }, 00:15:01.691 { 00:15:01.691 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.691 "subtype": "NVMe", 00:15:01.691 "listen_addresses": [ 00:15:01.691 { 00:15:01.691 "trtype": "VFIOUSER", 00:15:01.691 "adrfam": "IPv4", 00:15:01.691 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.691 "trsvcid": "0" 00:15:01.691 } 00:15:01.691 ], 00:15:01.691 "allow_any_host": true, 00:15:01.691 "hosts": [], 00:15:01.691 "serial_number": "SPDK2", 00:15:01.691 "model_number": "SPDK bdev Controller", 00:15:01.691 "max_namespaces": 32, 00:15:01.691 "min_cntlid": 1, 00:15:01.691 "max_cntlid": 65519, 00:15:01.691 "namespaces": [ 00:15:01.691 { 00:15:01.691 "nsid": 1, 00:15:01.691 "bdev_name": "Malloc2", 00:15:01.691 "name": "Malloc2", 00:15:01.691 "nguid": "2AE53EA941724D6EAF1BFF872B9AF192", 00:15:01.691 "uuid": "2ae53ea9-4172-4d6e-af1b-ff872b9af192" 00:15:01.691 } 00:15:01.691 ] 00:15:01.691 } 00:15:01.691 ] 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2162615 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:01.692 16:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:01.692 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:01.953 Malloc4 00:15:01.953 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:01.953 [2024-11-20 16:26:47.750460] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.953 [2024-11-20 16:26:47.896475] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:02.215 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:02.215 Asynchronous Event Request test 00:15:02.215 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:02.215 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:02.215 Registering asynchronous event callbacks... 00:15:02.215 Starting namespace attribute notice tests for all controllers... 
00:15:02.215 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:02.215 aer_cb - Changed Namespace 00:15:02.215 Cleaning up... 00:15:02.215 [ 00:15:02.215 { 00:15:02.215 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:02.215 "subtype": "Discovery", 00:15:02.215 "listen_addresses": [], 00:15:02.215 "allow_any_host": true, 00:15:02.215 "hosts": [] 00:15:02.215 }, 00:15:02.215 { 00:15:02.215 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:02.215 "subtype": "NVMe", 00:15:02.215 "listen_addresses": [ 00:15:02.215 { 00:15:02.215 "trtype": "VFIOUSER", 00:15:02.215 "adrfam": "IPv4", 00:15:02.215 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:02.215 "trsvcid": "0" 00:15:02.215 } 00:15:02.215 ], 00:15:02.215 "allow_any_host": true, 00:15:02.215 "hosts": [], 00:15:02.215 "serial_number": "SPDK1", 00:15:02.215 "model_number": "SPDK bdev Controller", 00:15:02.215 "max_namespaces": 32, 00:15:02.215 "min_cntlid": 1, 00:15:02.215 "max_cntlid": 65519, 00:15:02.215 "namespaces": [ 00:15:02.215 { 00:15:02.215 "nsid": 1, 00:15:02.215 "bdev_name": "Malloc1", 00:15:02.215 "name": "Malloc1", 00:15:02.215 "nguid": "1F839BC5C2514D84A6B440437DB2AB6B", 00:15:02.215 "uuid": "1f839bc5-c251-4d84-a6b4-40437db2ab6b" 00:15:02.215 }, 00:15:02.215 { 00:15:02.215 "nsid": 2, 00:15:02.215 "bdev_name": "Malloc3", 00:15:02.215 "name": "Malloc3", 00:15:02.215 "nguid": "C4E13A1B404640728F366711EEB3FC30", 00:15:02.215 "uuid": "c4e13a1b-4046-4072-8f36-6711eeb3fc30" 00:15:02.215 } 00:15:02.215 ] 00:15:02.215 }, 00:15:02.215 { 00:15:02.215 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:02.215 "subtype": "NVMe", 00:15:02.215 "listen_addresses": [ 00:15:02.215 { 00:15:02.215 "trtype": "VFIOUSER", 00:15:02.215 "adrfam": "IPv4", 00:15:02.215 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:02.215 "trsvcid": "0" 00:15:02.215 } 00:15:02.215 ], 00:15:02.215 "allow_any_host": true, 00:15:02.215 "hosts": [], 00:15:02.215 "serial_number": 
"SPDK2", 00:15:02.215 "model_number": "SPDK bdev Controller", 00:15:02.215 "max_namespaces": 32, 00:15:02.215 "min_cntlid": 1, 00:15:02.215 "max_cntlid": 65519, 00:15:02.215 "namespaces": [ 00:15:02.215 { 00:15:02.215 "nsid": 1, 00:15:02.215 "bdev_name": "Malloc2", 00:15:02.215 "name": "Malloc2", 00:15:02.215 "nguid": "2AE53EA941724D6EAF1BFF872B9AF192", 00:15:02.215 "uuid": "2ae53ea9-4172-4d6e-af1b-ff872b9af192" 00:15:02.215 }, 00:15:02.215 { 00:15:02.215 "nsid": 2, 00:15:02.215 "bdev_name": "Malloc4", 00:15:02.215 "name": "Malloc4", 00:15:02.215 "nguid": "7DB36CD950BD45058420C4DF58EE8FF7", 00:15:02.215 "uuid": "7db36cd9-50bd-4505-8420-c4df58ee8ff7" 00:15:02.215 } 00:15:02.215 ] 00:15:02.215 } 00:15:02.215 ] 00:15:02.215 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2162615 00:15:02.215 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:02.215 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2153499 00:15:02.215 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2153499 ']' 00:15:02.215 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2153499 00:15:02.215 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:02.215 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.215 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2153499 00:15:02.476 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.476 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.476 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2153499' 00:15:02.476 killing process with pid 2153499 00:15:02.476 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2153499 00:15:02.476 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2153499 00:15:02.476 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:02.476 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:02.476 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2162923 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2162923' 00:15:02.477 Process pid: 2162923 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2162923 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2162923 ']' 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.477 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:02.477 [2024-11-20 16:26:48.395175] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:02.477 [2024-11-20 16:26:48.396107] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:15:02.477 [2024-11-20 16:26:48.396149] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.738 [2024-11-20 16:26:48.468895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.738 [2024-11-20 16:26:48.503704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.738 [2024-11-20 16:26:48.503738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.738 [2024-11-20 16:26:48.503746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.738 [2024-11-20 16:26:48.503753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:02.738 [2024-11-20 16:26:48.503759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.738 [2024-11-20 16:26:48.505295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.738 [2024-11-20 16:26:48.505409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.738 [2024-11-20 16:26:48.505562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.738 [2024-11-20 16:26:48.505564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.738 [2024-11-20 16:26:48.561690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:02.738 [2024-11-20 16:26:48.561733] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:02.738 [2024-11-20 16:26:48.562626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:02.738 [2024-11-20 16:26:48.563352] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:02.738 [2024-11-20 16:26:48.563433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:03.309 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.309 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:03.309 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:04.250 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:04.510 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:04.510 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:04.510 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:04.510 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:04.510 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:04.771 Malloc1 00:15:04.771 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:04.771 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:05.031 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:05.291 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:05.291 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:05.291 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:05.291 Malloc2 00:15:05.550 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:05.550 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:05.810 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2162923 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2162923 ']' 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2162923 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.071 16:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2162923 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2162923' 00:15:06.071 killing process with pid 2162923 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2162923 00:15:06.071 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2162923 00:15:06.071 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:06.424 00:15:06.424 real 0m51.960s 00:15:06.424 user 3m19.592s 00:15:06.424 sys 0m2.674s 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:06.424 ************************************ 00:15:06.424 END TEST nvmf_vfio_user 00:15:06.424 ************************************ 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.424 ************************************ 00:15:06.424 START TEST nvmf_vfio_user_nvme_compliance 00:15:06.424 ************************************ 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:06.424 * Looking for test storage... 00:15:06.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:06.424 16:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:06.424 16:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:06.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.424 --rc genhtml_branch_coverage=1 00:15:06.424 --rc genhtml_function_coverage=1 00:15:06.424 --rc genhtml_legend=1 00:15:06.424 --rc geninfo_all_blocks=1 00:15:06.424 --rc geninfo_unexecuted_blocks=1 00:15:06.424 00:15:06.424 ' 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:06.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.424 --rc genhtml_branch_coverage=1 00:15:06.424 --rc genhtml_function_coverage=1 00:15:06.424 --rc genhtml_legend=1 00:15:06.424 --rc geninfo_all_blocks=1 00:15:06.424 --rc geninfo_unexecuted_blocks=1 00:15:06.424 00:15:06.424 ' 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:06.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.424 --rc genhtml_branch_coverage=1 00:15:06.424 --rc genhtml_function_coverage=1 00:15:06.424 --rc 
genhtml_legend=1 00:15:06.424 --rc geninfo_all_blocks=1 00:15:06.424 --rc geninfo_unexecuted_blocks=1 00:15:06.424 00:15:06.424 ' 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:06.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:06.424 --rc genhtml_branch_coverage=1 00:15:06.424 --rc genhtml_function_coverage=1 00:15:06.424 --rc genhtml_legend=1 00:15:06.424 --rc geninfo_all_blocks=1 00:15:06.424 --rc geninfo_unexecuted_blocks=1 00:15:06.424 00:15:06.424 ' 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.424 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.425 16:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:06.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:06.425 16:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2163683 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2163683' 00:15:06.425 Process pid: 2163683 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2163683 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2163683 ']' 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.425 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:06.731 [2024-11-20 16:26:52.388762] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:15:06.731 [2024-11-20 16:26:52.388816] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.731 [2024-11-20 16:26:52.461672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:06.731 [2024-11-20 16:26:52.497175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.731 [2024-11-20 16:26:52.497208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.731 [2024-11-20 16:26:52.497216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.731 [2024-11-20 16:26:52.497222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.731 [2024-11-20 16:26:52.497228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:06.731 [2024-11-20 16:26:52.498676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.731 [2024-11-20 16:26:52.498790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.731 [2024-11-20 16:26:52.498792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.301 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.301 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:07.301 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:08.241 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:08.241 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:08.241 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:08.241 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.241 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.502 16:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.502 malloc0 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:08.502 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:08.502 00:15:08.502 00:15:08.502 CUnit - A unit testing framework for C - Version 2.1-3 00:15:08.502 http://cunit.sourceforge.net/ 00:15:08.502 00:15:08.502 00:15:08.502 Suite: nvme_compliance 00:15:08.764 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 16:26:54.467405] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.764 [2024-11-20 16:26:54.468765] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:08.764 [2024-11-20 16:26:54.468776] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:08.764 [2024-11-20 16:26:54.468781] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:08.764 [2024-11-20 16:26:54.470422] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.764 passed 00:15:08.764 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 16:26:54.566023] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:08.764 [2024-11-20 16:26:54.569038] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:08.764 passed 00:15:08.764 Test: admin_identify_ns ...[2024-11-20 16:26:54.664232] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.025 [2024-11-20 16:26:54.727995] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:09.025 [2024-11-20 16:26:54.735993] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:09.025 [2024-11-20 16:26:54.757106] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:09.025 passed 00:15:09.025 Test: admin_get_features_mandatory_features ...[2024-11-20 16:26:54.848778] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.025 [2024-11-20 16:26:54.851797] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.025 passed 00:15:09.025 Test: admin_get_features_optional_features ...[2024-11-20 16:26:54.945334] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.025 [2024-11-20 16:26:54.948349] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.285 passed 00:15:09.285 Test: admin_set_features_number_of_queues ...[2024-11-20 16:26:55.042468] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.285 [2024-11-20 16:26:55.147090] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.285 passed 00:15:09.285 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 16:26:55.240753] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.546 [2024-11-20 16:26:55.243773] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.546 passed 00:15:09.546 Test: admin_get_log_page_with_lpo ...[2024-11-20 16:26:55.336891] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.546 [2024-11-20 16:26:55.404001] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:09.546 [2024-11-20 16:26:55.417057] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.546 passed 00:15:09.806 Test: fabric_property_get ...[2024-11-20 16:26:55.509085] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.806 [2024-11-20 16:26:55.510455] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:09.806 [2024-11-20 16:26:55.512128] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.806 passed 00:15:09.806 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 16:26:55.607710] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:09.806 [2024-11-20 16:26:55.608953] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:09.806 [2024-11-20 16:26:55.610726] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:09.806 passed 00:15:09.806 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 16:26:55.703228] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.066 [2024-11-20 16:26:55.786990] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:10.066 [2024-11-20 16:26:55.802987] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:10.066 [2024-11-20 16:26:55.808077] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.066 passed 00:15:10.066 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 16:26:55.902073] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.066 [2024-11-20 16:26:55.903321] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:10.066 [2024-11-20 16:26:55.905091] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.066 passed 00:15:10.066 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 16:26:55.997245] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.327 [2024-11-20 16:26:56.074000] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:10.327 [2024-11-20 
16:26:56.097990] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:10.327 [2024-11-20 16:26:56.103080] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.327 passed 00:15:10.327 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 16:26:56.195711] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.327 [2024-11-20 16:26:56.196960] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:10.327 [2024-11-20 16:26:56.196986] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:10.327 [2024-11-20 16:26:56.198735] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.327 passed 00:15:10.588 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 16:26:56.292832] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.588 [2024-11-20 16:26:56.384991] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:10.588 [2024-11-20 16:26:56.392988] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:10.588 [2024-11-20 16:26:56.400993] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:10.588 [2024-11-20 16:26:56.408988] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:10.588 [2024-11-20 16:26:56.438070] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.588 passed 00:15:10.588 Test: admin_create_io_sq_verify_pc ...[2024-11-20 16:26:56.527711] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:10.588 [2024-11-20 16:26:56.542999] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:10.848 [2024-11-20 16:26:56.560856] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:10.849 passed 00:15:10.849 Test: admin_create_io_qp_max_qps ...[2024-11-20 16:26:56.656407] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.233 [2024-11-20 16:26:57.763995] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:12.233 [2024-11-20 16:26:58.142480] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.233 passed 00:15:12.494 Test: admin_create_io_sq_shared_cq ...[2024-11-20 16:26:58.237233] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:12.494 [2024-11-20 16:26:58.368989] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:12.494 [2024-11-20 16:26:58.406038] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:12.494 passed 00:15:12.494 00:15:12.494 Run Summary: Type Total Ran Passed Failed Inactive 00:15:12.494 suites 1 1 n/a 0 0 00:15:12.494 tests 18 18 18 0 0 00:15:12.494 asserts 360 360 360 0 n/a 00:15:12.494 00:15:12.494 Elapsed time = 1.650 seconds 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2163683 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2163683 ']' 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2163683 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2163683 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2163683' 00:15:12.756 killing process with pid 2163683 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2163683 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2163683 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:12.756 00:15:12.756 real 0m6.557s 00:15:12.756 user 0m18.617s 00:15:12.756 sys 0m0.537s 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:12.756 ************************************ 00:15:12.756 END TEST nvmf_vfio_user_nvme_compliance 00:15:12.756 ************************************ 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.756 16:26:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:13.017 ************************************ 00:15:13.017 START TEST nvmf_vfio_user_fuzz 00:15:13.017 ************************************ 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:13.017 * Looking for test storage... 00:15:13.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.017 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.018 16:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:13.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.018 --rc genhtml_branch_coverage=1 00:15:13.018 --rc genhtml_function_coverage=1 00:15:13.018 --rc genhtml_legend=1 00:15:13.018 --rc geninfo_all_blocks=1 00:15:13.018 --rc geninfo_unexecuted_blocks=1 00:15:13.018 00:15:13.018 ' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:13.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.018 --rc genhtml_branch_coverage=1 00:15:13.018 --rc genhtml_function_coverage=1 00:15:13.018 --rc genhtml_legend=1 00:15:13.018 --rc geninfo_all_blocks=1 00:15:13.018 --rc geninfo_unexecuted_blocks=1 00:15:13.018 00:15:13.018 ' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:13.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.018 --rc genhtml_branch_coverage=1 00:15:13.018 --rc genhtml_function_coverage=1 00:15:13.018 --rc genhtml_legend=1 00:15:13.018 --rc geninfo_all_blocks=1 00:15:13.018 --rc geninfo_unexecuted_blocks=1 00:15:13.018 00:15:13.018 ' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:13.018 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:13.018 --rc genhtml_branch_coverage=1 00:15:13.018 --rc genhtml_function_coverage=1 00:15:13.018 --rc genhtml_legend=1 00:15:13.018 --rc geninfo_all_blocks=1 00:15:13.018 --rc geninfo_unexecuted_blocks=1 00:15:13.018 00:15:13.018 ' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.018 16:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:13.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:13.018 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2165087 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2165087' 00:15:13.019 Process pid: 2165087 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2165087 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2165087 ']' 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.019 16:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.019 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:13.960 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.960 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:13.960 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.899 malloc0 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:14.899 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:47.007 Fuzzing completed. Shutting down the fuzz application 00:15:47.007 00:15:47.007 Dumping successful admin opcodes: 00:15:47.007 8, 9, 10, 24, 00:15:47.007 Dumping successful io opcodes: 00:15:47.007 0, 00:15:47.007 NS: 0x20000081ef00 I/O qp, Total commands completed: 1169914, total successful commands: 4602, random_seed: 261094592 00:15:47.007 NS: 0x20000081ef00 admin qp, Total commands completed: 147004, total successful commands: 1191, random_seed: 2237701312 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2165087 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2165087 ']' 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2165087 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2165087 00:15:47.007 16:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2165087' 00:15:47.007 killing process with pid 2165087 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2165087 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2165087 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:47.007 00:15:47.007 real 0m33.736s 00:15:47.007 user 0m39.741s 00:15:47.007 sys 0m23.483s 00:15:47.007 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.008 ************************************ 00:15:47.008 END TEST nvmf_vfio_user_fuzz 00:15:47.008 ************************************ 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.008 ************************************ 00:15:47.008 START TEST nvmf_auth_target 00:15:47.008 ************************************ 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:47.008 * Looking for test storage... 00:15:47.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.008 16:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.008 16:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:47.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.008 --rc genhtml_branch_coverage=1 00:15:47.008 --rc genhtml_function_coverage=1 00:15:47.008 --rc genhtml_legend=1 00:15:47.008 --rc geninfo_all_blocks=1 00:15:47.008 --rc geninfo_unexecuted_blocks=1 00:15:47.008 00:15:47.008 ' 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:47.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.008 --rc genhtml_branch_coverage=1 00:15:47.008 --rc genhtml_function_coverage=1 00:15:47.008 --rc genhtml_legend=1 00:15:47.008 --rc geninfo_all_blocks=1 00:15:47.008 --rc geninfo_unexecuted_blocks=1 00:15:47.008 00:15:47.008 ' 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:47.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.008 --rc genhtml_branch_coverage=1 00:15:47.008 --rc genhtml_function_coverage=1 00:15:47.008 --rc genhtml_legend=1 00:15:47.008 --rc geninfo_all_blocks=1 00:15:47.008 --rc geninfo_unexecuted_blocks=1 00:15:47.008 00:15:47.008 ' 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:47.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.008 --rc genhtml_branch_coverage=1 00:15:47.008 --rc genhtml_function_coverage=1 00:15:47.008 --rc genhtml_legend=1 00:15:47.008 
--rc geninfo_all_blocks=1 00:15:47.008 --rc geninfo_unexecuted_blocks=1 00:15:47.008 00:15:47.008 ' 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.008 
16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.008 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:47.009 16:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:47.009 16:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:47.009 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.164 16:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:55.164 16:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:55.164 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:55.164 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.164 
16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:55.164 Found net devices under 0000:31:00.0: cvl_0_0 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.164 
16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.164 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:55.165 Found net devices under 0000:31:00.1: cvl_0_1 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.165 16:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:55.165 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:55.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:15:55.165 00:15:55.165 --- 10.0.0.2 ping statistics --- 00:15:55.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.165 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:15:55.165 00:15:55.165 --- 10.0.0.1 ping statistics --- 00:15:55.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.165 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
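The `nvmf_tcp_init` sequence traced above follows a fixed pattern: move the target-side port (`cvl_0_0`) into a private network namespace, address both ends of the 10.0.0.0/24 link, punch a firewall hole for the NVMe/TCP listener on port 4420, and verify reachability in both directions with `ping`. A condensed sketch of those steps (interface names and addresses are taken from this trace; running it requires root and the actual NICs, so it is illustrative rather than portable):

```shell
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP connections arriving on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
```

Isolating the target port in a namespace is what lets a single host exercise both ends of the TCP transport; the later `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt` invocation runs the target inside that namespace.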
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2175906 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2175906 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2175906 ']' 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2176044 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=915bbcc17b1c97915068c2e31ec77b3bcf9d12d2fc596348 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lAH 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 915bbcc17b1c97915068c2e31ec77b3bcf9d12d2fc596348 0 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 915bbcc17b1c97915068c2e31ec77b3bcf9d12d2fc596348 0 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=915bbcc17b1c97915068c2e31ec77b3bcf9d12d2fc596348 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:55.165 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lAH 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo 
/tmp/spdk.key-null.lAH 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.lAH 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6bbbdd8246a9ffa73b6878b49a4c8eeb9af75bb9599f82e456e3cb1d8d5b7f2e 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Vy2 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6bbbdd8246a9ffa73b6878b49a4c8eeb9af75bb9599f82e456e3cb1d8d5b7f2e 3 00:15:55.165 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6bbbdd8246a9ffa73b6878b49a4c8eeb9af75bb9599f82e456e3cb1d8d5b7f2e 3 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:55.166 16:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6bbbdd8246a9ffa73b6878b49a4c8eeb9af75bb9599f82e456e3cb1d8d5b7f2e 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Vy2 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Vy2 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Vy2 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=09c00950e69010dee986063a5618db6a 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RBx 00:15:55.166 16:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 09c00950e69010dee986063a5618db6a 1 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 09c00950e69010dee986063a5618db6a 1 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=09c00950e69010dee986063a5618db6a 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:55.166 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RBx 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RBx 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.RBx 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6bf5b8049007c874278a1b6631cd1ee0dab82ff3842edac7 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.knV 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6bf5b8049007c874278a1b6631cd1ee0dab82ff3842edac7 2 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6bf5b8049007c874278a1b6631cd1ee0dab82ff3842edac7 2 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6bf5b8049007c874278a1b6631cd1ee0dab82ff3842edac7 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.knV 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.knV 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.knV 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:55.428 16:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c2a40c7eccd6ea7347402c17591015b5528b075b48c23482 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZhF 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c2a40c7eccd6ea7347402c17591015b5528b075b48c23482 2 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c2a40c7eccd6ea7347402c17591015b5528b075b48c23482 2 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c2a40c7eccd6ea7347402c17591015b5528b075b48c23482 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZhF 
00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZhF 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ZhF 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d3b7e5e1b29c53021f9f6f0fe4a58715 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Bf1 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d3b7e5e1b29c53021f9f6f0fe4a58715 1 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d3b7e5e1b29c53021f9f6f0fe4a58715 1 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:55.428 16:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d3b7e5e1b29c53021f9f6f0fe4a58715 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:55.428 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Bf1 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Bf1 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Bf1 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d27f663a41780121884a48ca277b1458bde95e9321553de212bc13ab5d9b5c0a 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2L0 00:15:55.429 16:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d27f663a41780121884a48ca277b1458bde95e9321553de212bc13ab5d9b5c0a 3 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d27f663a41780121884a48ca277b1458bde95e9321553de212bc13ab5d9b5c0a 3 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d27f663a41780121884a48ca277b1458bde95e9321553de212bc13ab5d9b5c0a 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:55.429 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2L0 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2L0 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.2L0 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2175906 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2175906 ']' 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
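Each `gen_dhchap_key <digest> <len>` call above reads `len/2` random bytes via `xxd -p -c0 /dev/urandom`, then pipes the hex key through an inline `python -` helper (`format_dhchap_key` / `format_key`) to produce the configured secret. The exact helper body is not shown in the trace; the sketch below reconstructs it under the assumption that SPDK emits the NVMe-oF TP-8006 `DHHC-1` representation, i.e. `DHHC-1:<2-hex-digit digest id>:<base64(key bytes || CRC32, little-endian)>:` — treat the layout as an assumption, not SPDK's verified implementation:

```python
import base64
import binascii
import os

def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Assumed reconstruction of the inline `python -` helper in nvmf/common.sh.

    Produces a TP-8006-style secret: base64 of the raw key bytes followed by
    their CRC32 (little-endian), wrapped as DHHC-1:<digest>:<b64>:.
    """
    key = bytes.fromhex(key_hex)
    crc = binascii.crc32(key).to_bytes(4, "little")
    return f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:"

# Mirror "gen_dhchap_key null 48": 48 hex chars = 24 random bytes, digest id 0.
secret = format_dhchap_key(os.urandom(24).hex(), 0)
```

The digest id (0 = null, 1 = sha256, 2 = sha384, 3 = sha512) matches the `digests` associative array the trace declares before each key is generated.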
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2176044 /var/tmp/host.sock 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2176044 ']' 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:55.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.692 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lAH 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.lAH 00:15:55.954 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.lAH 00:15:56.215 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Vy2 ]] 00:15:56.215 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vy2 00:15:56.215 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.215 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.215 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.215 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vy2 00:15:56.215 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vy2 00:15:56.215 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:56.215 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RBx 00:15:56.215 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.215 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.215 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.215 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.RBx 00:15:56.215 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.RBx 00:15:56.476 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.knV ]] 00:15:56.476 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.knV 00:15:56.476 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.476 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.476 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.476 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.knV 00:15:56.476 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.knV 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZhF 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZhF 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZhF 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Bf1 ]] 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Bf1 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Bf1 00:15:56.736 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Bf1 00:15:56.996 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:56.996 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2L0 00:15:56.996 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.996 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.996 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.996 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2L0 00:15:56.996 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2L0 00:15:57.257 16:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:57.257 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:57.257 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.257 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.257 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:57.257 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.258 16:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.258 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.519 00:15:57.519 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.519 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.519 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.780 { 00:15:57.780 "cntlid": 1, 00:15:57.780 "qid": 0, 00:15:57.780 "state": "enabled", 00:15:57.780 "thread": "nvmf_tgt_poll_group_000", 00:15:57.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:15:57.780 "listen_address": { 00:15:57.780 "trtype": "TCP", 00:15:57.780 "adrfam": "IPv4", 00:15:57.780 "traddr": "10.0.0.2", 00:15:57.780 "trsvcid": "4420" 00:15:57.780 }, 00:15:57.780 "peer_address": { 00:15:57.780 "trtype": "TCP", 00:15:57.780 "adrfam": "IPv4", 00:15:57.780 "traddr": "10.0.0.1", 00:15:57.780 "trsvcid": "49286" 00:15:57.780 }, 00:15:57.780 "auth": { 00:15:57.780 "state": "completed", 00:15:57.780 "digest": "sha256", 00:15:57.780 "dhgroup": "null" 00:15:57.780 } 00:15:57.780 } 00:15:57.780 ]' 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:57.780 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.042 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.042 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.042 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.042 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:15:58.042 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.985 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.246 00:15:59.246 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.246 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.246 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.508 { 00:15:59.508 "cntlid": 3, 00:15:59.508 "qid": 0, 00:15:59.508 "state": "enabled", 00:15:59.508 "thread": "nvmf_tgt_poll_group_000", 00:15:59.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:15:59.508 "listen_address": { 00:15:59.508 "trtype": "TCP", 00:15:59.508 "adrfam": "IPv4", 00:15:59.508 
"traddr": "10.0.0.2", 00:15:59.508 "trsvcid": "4420" 00:15:59.508 }, 00:15:59.508 "peer_address": { 00:15:59.508 "trtype": "TCP", 00:15:59.508 "adrfam": "IPv4", 00:15:59.508 "traddr": "10.0.0.1", 00:15:59.508 "trsvcid": "49310" 00:15:59.508 }, 00:15:59.508 "auth": { 00:15:59.508 "state": "completed", 00:15:59.508 "digest": "sha256", 00:15:59.508 "dhgroup": "null" 00:15:59.508 } 00:15:59.508 } 00:15:59.508 ]' 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:59.508 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.768 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.768 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.768 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.768 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:15:59.768 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.711 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.972 00:16:00.972 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.972 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.972 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.233 { 00:16:01.233 "cntlid": 5, 00:16:01.233 "qid": 0, 00:16:01.233 "state": "enabled", 00:16:01.233 "thread": "nvmf_tgt_poll_group_000", 00:16:01.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:01.233 "listen_address": { 00:16:01.233 "trtype": "TCP", 00:16:01.233 "adrfam": "IPv4", 00:16:01.233 "traddr": "10.0.0.2", 00:16:01.233 "trsvcid": "4420" 00:16:01.233 }, 00:16:01.233 "peer_address": { 00:16:01.233 "trtype": "TCP", 00:16:01.233 "adrfam": "IPv4", 00:16:01.233 "traddr": "10.0.0.1", 00:16:01.233 "trsvcid": "49344" 00:16:01.233 }, 00:16:01.233 "auth": { 00:16:01.233 "state": "completed", 00:16:01.233 "digest": "sha256", 00:16:01.233 "dhgroup": "null" 00:16:01.233 } 00:16:01.233 } 00:16:01.233 ]' 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.233 16:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.233 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.494 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:01.494 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:02.437 
16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.437 16:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.437 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.698 00:16:02.698 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.698 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.698 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.959 16:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.959 { 00:16:02.959 "cntlid": 7, 00:16:02.959 "qid": 0, 00:16:02.959 "state": "enabled", 00:16:02.959 "thread": "nvmf_tgt_poll_group_000", 00:16:02.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:02.959 "listen_address": { 00:16:02.959 "trtype": "TCP", 00:16:02.959 "adrfam": "IPv4", 00:16:02.959 "traddr": "10.0.0.2", 00:16:02.959 "trsvcid": "4420" 00:16:02.959 }, 00:16:02.959 "peer_address": { 00:16:02.959 "trtype": "TCP", 00:16:02.959 "adrfam": "IPv4", 00:16:02.959 "traddr": "10.0.0.1", 00:16:02.959 "trsvcid": "39346" 00:16:02.959 }, 00:16:02.959 "auth": { 00:16:02.959 "state": "completed", 00:16:02.959 "digest": "sha256", 00:16:02.959 "dhgroup": "null" 00:16:02.959 } 00:16:02.959 } 00:16:02.959 ]' 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.959 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:03.220 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:03.220 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:03.789 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.050 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.050 16:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.310 00:16:04.310 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.310 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.310 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.571 { 00:16:04.571 "cntlid": 9, 00:16:04.571 "qid": 0, 00:16:04.571 "state": "enabled", 00:16:04.571 "thread": "nvmf_tgt_poll_group_000", 00:16:04.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:04.571 "listen_address": { 00:16:04.571 "trtype": "TCP", 00:16:04.571 "adrfam": "IPv4", 00:16:04.571 "traddr": "10.0.0.2", 00:16:04.571 "trsvcid": "4420" 00:16:04.571 }, 00:16:04.571 "peer_address": { 
00:16:04.571 "trtype": "TCP", 00:16:04.571 "adrfam": "IPv4", 00:16:04.571 "traddr": "10.0.0.1", 00:16:04.571 "trsvcid": "39382" 00:16:04.571 }, 00:16:04.571 "auth": { 00:16:04.571 "state": "completed", 00:16:04.571 "digest": "sha256", 00:16:04.571 "dhgroup": "ffdhe2048" 00:16:04.571 } 00:16:04.571 } 00:16:04.571 ]' 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.571 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.833 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.833 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.833 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.833 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:04.833 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:05.775 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.775 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:05.775 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:05.776 16:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.776 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.037 00:16:06.037 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.037 16:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.037 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.297 { 00:16:06.297 "cntlid": 11, 00:16:06.297 "qid": 0, 00:16:06.297 "state": "enabled", 00:16:06.297 "thread": "nvmf_tgt_poll_group_000", 00:16:06.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:06.297 "listen_address": { 00:16:06.297 "trtype": "TCP", 00:16:06.297 "adrfam": "IPv4", 00:16:06.297 "traddr": "10.0.0.2", 00:16:06.297 "trsvcid": "4420" 00:16:06.297 }, 00:16:06.297 "peer_address": { 00:16:06.297 "trtype": "TCP", 00:16:06.297 "adrfam": "IPv4", 00:16:06.297 "traddr": "10.0.0.1", 00:16:06.297 "trsvcid": "39410" 00:16:06.297 }, 00:16:06.297 "auth": { 00:16:06.297 "state": "completed", 00:16:06.297 "digest": "sha256", 00:16:06.297 "dhgroup": "ffdhe2048" 00:16:06.297 } 00:16:06.297 } 00:16:06.297 ]' 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:06.297 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.559 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.559 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.559 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.559 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:06.559 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.502 16:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.502 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.763 00:16:07.763 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.763 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.763 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.024 { 00:16:08.024 "cntlid": 13, 00:16:08.024 "qid": 0, 00:16:08.024 "state": "enabled", 00:16:08.024 "thread": "nvmf_tgt_poll_group_000", 00:16:08.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:08.024 "listen_address": { 00:16:08.024 "trtype": "TCP", 00:16:08.024 "adrfam": "IPv4", 00:16:08.024 "traddr": "10.0.0.2", 00:16:08.024 "trsvcid": "4420" 00:16:08.024 }, 00:16:08.024 "peer_address": { 00:16:08.024 "trtype": "TCP", 00:16:08.024 "adrfam": "IPv4", 00:16:08.024 "traddr": "10.0.0.1", 00:16:08.024 "trsvcid": "39448" 00:16:08.024 }, 00:16:08.024 "auth": { 00:16:08.024 "state": "completed", 00:16:08.024 "digest": "sha256", 00:16:08.024 "dhgroup": "ffdhe2048" 00:16:08.024 } 00:16:08.024 } 00:16:08.024 ]' 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.024 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.284 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.284 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:08.284 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.284 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:08.284 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:09.225 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.225 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:09.225 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.225 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.225 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.225 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.225 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.225 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.225 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:09.225 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.225 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.225 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:09.225 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.226 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.226 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:09.226 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.226 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.226 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.226 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.226 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.226 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.487 00:16:09.487 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.487 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.487 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.749 { 00:16:09.749 "cntlid": 15, 00:16:09.749 "qid": 0, 00:16:09.749 "state": "enabled", 00:16:09.749 "thread": "nvmf_tgt_poll_group_000", 00:16:09.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:09.749 "listen_address": { 00:16:09.749 "trtype": "TCP", 00:16:09.749 "adrfam": "IPv4", 00:16:09.749 "traddr": "10.0.0.2", 00:16:09.749 "trsvcid": 
"4420" 00:16:09.749 }, 00:16:09.749 "peer_address": { 00:16:09.749 "trtype": "TCP", 00:16:09.749 "adrfam": "IPv4", 00:16:09.749 "traddr": "10.0.0.1", 00:16:09.749 "trsvcid": "39482" 00:16:09.749 }, 00:16:09.749 "auth": { 00:16:09.749 "state": "completed", 00:16:09.749 "digest": "sha256", 00:16:09.749 "dhgroup": "ffdhe2048" 00:16:09.749 } 00:16:09.749 } 00:16:09.749 ]' 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:09.749 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.010 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.010 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.010 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.010 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:10.010 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret 
DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.953 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.954 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.954 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.215 00:16:11.215 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.215 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.215 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.476 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.476 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.476 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.476 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.476 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.476 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.476 { 00:16:11.476 "cntlid": 17, 00:16:11.476 "qid": 0, 00:16:11.476 "state": "enabled", 00:16:11.476 "thread": "nvmf_tgt_poll_group_000", 00:16:11.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:11.476 "listen_address": { 00:16:11.476 "trtype": "TCP", 00:16:11.476 "adrfam": "IPv4", 00:16:11.476 "traddr": "10.0.0.2", 00:16:11.476 "trsvcid": "4420" 00:16:11.476 }, 00:16:11.476 "peer_address": { 00:16:11.476 "trtype": "TCP", 00:16:11.476 "adrfam": "IPv4", 00:16:11.476 "traddr": "10.0.0.1", 00:16:11.476 "trsvcid": "39500" 00:16:11.476 }, 00:16:11.476 "auth": { 00:16:11.476 "state": "completed", 00:16:11.476 "digest": "sha256", 00:16:11.476 "dhgroup": "ffdhe3072" 00:16:11.476 } 00:16:11.477 } 00:16:11.477 ]' 00:16:11.477 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.477 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.477 16:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.477 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.477 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.477 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.477 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.477 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.737 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:11.737 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:12.679 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.680 16:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.680 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.940 00:16:12.940 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.940 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.940 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.203 { 00:16:13.203 "cntlid": 19, 00:16:13.203 "qid": 0, 00:16:13.203 "state": "enabled", 00:16:13.203 "thread": "nvmf_tgt_poll_group_000", 00:16:13.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:13.203 "listen_address": { 00:16:13.203 "trtype": "TCP", 00:16:13.203 "adrfam": "IPv4", 00:16:13.203 "traddr": "10.0.0.2", 00:16:13.203 "trsvcid": "4420" 00:16:13.203 }, 00:16:13.203 "peer_address": { 00:16:13.203 "trtype": "TCP", 00:16:13.203 "adrfam": "IPv4", 00:16:13.203 "traddr": "10.0.0.1", 00:16:13.203 "trsvcid": "43960" 00:16:13.203 }, 00:16:13.203 "auth": { 00:16:13.203 "state": "completed", 00:16:13.203 "digest": "sha256", 00:16:13.203 "dhgroup": "ffdhe3072" 00:16:13.203 } 00:16:13.203 } 00:16:13.203 ]' 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.203 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.203 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.203 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.203 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.203 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:13.203 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.464 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:13.464 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.406 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.666 00:16:14.666 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.666 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.666 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.927 { 00:16:14.927 "cntlid": 21, 00:16:14.927 "qid": 0, 00:16:14.927 "state": "enabled", 00:16:14.927 "thread": "nvmf_tgt_poll_group_000", 00:16:14.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:14.927 "listen_address": { 
00:16:14.927 "trtype": "TCP", 00:16:14.927 "adrfam": "IPv4", 00:16:14.927 "traddr": "10.0.0.2", 00:16:14.927 "trsvcid": "4420" 00:16:14.927 }, 00:16:14.927 "peer_address": { 00:16:14.927 "trtype": "TCP", 00:16:14.927 "adrfam": "IPv4", 00:16:14.927 "traddr": "10.0.0.1", 00:16:14.927 "trsvcid": "43986" 00:16:14.927 }, 00:16:14.927 "auth": { 00:16:14.927 "state": "completed", 00:16:14.927 "digest": "sha256", 00:16:14.927 "dhgroup": "ffdhe3072" 00:16:14.927 } 00:16:14.927 } 00:16:14.927 ]' 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.927 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.187 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:15.187 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:15.756 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.016 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:16.016 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.016 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.016 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.016 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.017 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.277 00:16:16.277 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.277 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:16.277 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.537 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.537 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.537 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.537 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.537 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.537 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.537 { 00:16:16.537 "cntlid": 23, 00:16:16.537 "qid": 0, 00:16:16.537 "state": "enabled", 00:16:16.537 "thread": "nvmf_tgt_poll_group_000", 00:16:16.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:16.537 "listen_address": { 00:16:16.537 "trtype": "TCP", 00:16:16.537 "adrfam": "IPv4", 00:16:16.537 "traddr": "10.0.0.2", 00:16:16.537 "trsvcid": "4420" 00:16:16.537 }, 00:16:16.537 "peer_address": { 00:16:16.538 "trtype": "TCP", 00:16:16.538 "adrfam": "IPv4", 00:16:16.538 "traddr": "10.0.0.1", 00:16:16.538 "trsvcid": "44012" 00:16:16.538 }, 00:16:16.538 "auth": { 00:16:16.538 "state": "completed", 00:16:16.538 "digest": "sha256", 00:16:16.538 "dhgroup": "ffdhe3072" 00:16:16.538 } 00:16:16.538 } 00:16:16.538 ]' 00:16:16.538 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.538 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.538 16:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.538 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.538 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.812 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.812 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.812 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.812 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:16.812 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.842 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.103 00:16:18.103 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.103 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.103 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.365 16:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.365 { 00:16:18.365 "cntlid": 25, 00:16:18.365 "qid": 0, 00:16:18.365 "state": "enabled", 00:16:18.365 "thread": "nvmf_tgt_poll_group_000", 00:16:18.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:18.365 "listen_address": { 00:16:18.365 "trtype": "TCP", 00:16:18.365 "adrfam": "IPv4", 00:16:18.365 "traddr": "10.0.0.2", 00:16:18.365 "trsvcid": "4420" 00:16:18.365 }, 00:16:18.365 "peer_address": { 00:16:18.365 "trtype": "TCP", 00:16:18.365 "adrfam": "IPv4", 00:16:18.365 "traddr": "10.0.0.1", 00:16:18.365 "trsvcid": "44052" 00:16:18.365 }, 00:16:18.365 "auth": { 00:16:18.365 "state": "completed", 00:16:18.365 "digest": "sha256", 00:16:18.365 "dhgroup": "ffdhe4096" 00:16:18.365 } 00:16:18.365 } 00:16:18.365 ]' 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.365 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.365 16:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.626 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:18.626 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.570 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.831 00:16:19.831 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.831 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.831 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.092 { 00:16:20.092 "cntlid": 27, 00:16:20.092 "qid": 0, 00:16:20.092 "state": "enabled", 00:16:20.092 "thread": "nvmf_tgt_poll_group_000", 00:16:20.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:20.092 
"listen_address": { 00:16:20.092 "trtype": "TCP", 00:16:20.092 "adrfam": "IPv4", 00:16:20.092 "traddr": "10.0.0.2", 00:16:20.092 "trsvcid": "4420" 00:16:20.092 }, 00:16:20.092 "peer_address": { 00:16:20.092 "trtype": "TCP", 00:16:20.092 "adrfam": "IPv4", 00:16:20.092 "traddr": "10.0.0.1", 00:16:20.092 "trsvcid": "44082" 00:16:20.092 }, 00:16:20.092 "auth": { 00:16:20.092 "state": "completed", 00:16:20.092 "digest": "sha256", 00:16:20.092 "dhgroup": "ffdhe4096" 00:16:20.092 } 00:16:20.092 } 00:16:20.092 ]' 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.092 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.353 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:20.353 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:20.924 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.184 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:21.184 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.184 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.184 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.184 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.184 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:21.184 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.185 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.445 00:16:21.445 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:21.445 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.445 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.704 { 00:16:21.704 "cntlid": 29, 00:16:21.704 "qid": 0, 00:16:21.704 "state": "enabled", 00:16:21.704 "thread": "nvmf_tgt_poll_group_000", 00:16:21.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:21.704 "listen_address": { 00:16:21.704 "trtype": "TCP", 00:16:21.704 "adrfam": "IPv4", 00:16:21.704 "traddr": "10.0.0.2", 00:16:21.704 "trsvcid": "4420" 00:16:21.704 }, 00:16:21.704 "peer_address": { 00:16:21.704 "trtype": "TCP", 00:16:21.704 "adrfam": "IPv4", 00:16:21.704 "traddr": "10.0.0.1", 00:16:21.704 "trsvcid": "44100" 00:16:21.704 }, 00:16:21.704 "auth": { 00:16:21.704 "state": "completed", 00:16:21.704 "digest": "sha256", 00:16:21.704 "dhgroup": "ffdhe4096" 00:16:21.704 } 00:16:21.704 } 00:16:21.704 ]' 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.704 16:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.704 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.964 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.964 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.964 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.964 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:21.964 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:22.904 16:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.904 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:23.164 00:16:23.164 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.164 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.164 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.425 16:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.425 { 00:16:23.425 "cntlid": 31, 00:16:23.425 "qid": 0, 00:16:23.425 "state": "enabled", 00:16:23.425 "thread": "nvmf_tgt_poll_group_000", 00:16:23.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:23.425 "listen_address": { 00:16:23.425 "trtype": "TCP", 00:16:23.425 "adrfam": "IPv4", 00:16:23.425 "traddr": "10.0.0.2", 00:16:23.425 "trsvcid": "4420" 00:16:23.425 }, 00:16:23.425 "peer_address": { 00:16:23.425 "trtype": "TCP", 00:16:23.425 "adrfam": "IPv4", 00:16:23.425 "traddr": "10.0.0.1", 00:16:23.425 "trsvcid": "60604" 00:16:23.425 }, 00:16:23.425 "auth": { 00:16:23.425 "state": "completed", 00:16:23.425 "digest": "sha256", 00:16:23.425 "dhgroup": "ffdhe4096" 00:16:23.425 } 00:16:23.425 } 00:16:23.425 ]' 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.425 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.425 16:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.690 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:23.690 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.632 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.893 00:16:24.893 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.893 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.893 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.155 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.155 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.155 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.155 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.155 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.155 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.155 { 00:16:25.155 "cntlid": 33, 00:16:25.155 "qid": 0, 00:16:25.155 "state": "enabled", 00:16:25.155 "thread": "nvmf_tgt_poll_group_000", 00:16:25.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:25.155 "listen_address": { 
00:16:25.155 "trtype": "TCP", 00:16:25.155 "adrfam": "IPv4", 00:16:25.155 "traddr": "10.0.0.2", 00:16:25.155 "trsvcid": "4420" 00:16:25.155 }, 00:16:25.155 "peer_address": { 00:16:25.155 "trtype": "TCP", 00:16:25.155 "adrfam": "IPv4", 00:16:25.155 "traddr": "10.0.0.1", 00:16:25.155 "trsvcid": "60616" 00:16:25.155 }, 00:16:25.155 "auth": { 00:16:25.155 "state": "completed", 00:16:25.155 "digest": "sha256", 00:16:25.155 "dhgroup": "ffdhe6144" 00:16:25.155 } 00:16:25.155 } 00:16:25.155 ]' 00:16:25.155 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.155 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.155 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.155 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.155 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.416 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.416 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.416 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.416 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:25.416 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.359 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.931 00:16:26.931 16:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.931 { 00:16:26.931 "cntlid": 35, 00:16:26.931 "qid": 0, 00:16:26.931 "state": "enabled", 00:16:26.931 "thread": "nvmf_tgt_poll_group_000", 00:16:26.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:26.931 "listen_address": { 00:16:26.931 "trtype": "TCP", 00:16:26.931 "adrfam": "IPv4", 00:16:26.931 "traddr": "10.0.0.2", 00:16:26.931 "trsvcid": "4420" 00:16:26.931 }, 00:16:26.931 "peer_address": { 00:16:26.931 "trtype": "TCP", 00:16:26.931 "adrfam": "IPv4", 00:16:26.931 "traddr": "10.0.0.1", 00:16:26.931 "trsvcid": "60644" 00:16:26.931 }, 00:16:26.931 "auth": { 00:16:26.931 "state": "completed", 00:16:26.931 "digest": "sha256", 00:16:26.931 "dhgroup": "ffdhe6144" 00:16:26.931 } 00:16:26.931 } 00:16:26.931 ]' 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.931 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.192 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:27.192 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.192 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.192 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.192 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:27.192 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:28.133 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.133 16:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:28.133 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.133 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.133 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.133 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.133 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:28.133 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.133 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.705 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.705 { 00:16:28.705 "cntlid": 37, 00:16:28.705 "qid": 0, 00:16:28.705 "state": "enabled", 00:16:28.705 "thread": "nvmf_tgt_poll_group_000", 00:16:28.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:28.705 "listen_address": { 00:16:28.705 "trtype": "TCP", 00:16:28.705 "adrfam": "IPv4", 00:16:28.705 "traddr": "10.0.0.2", 00:16:28.705 "trsvcid": "4420" 00:16:28.705 }, 00:16:28.705 "peer_address": { 00:16:28.705 "trtype": "TCP", 00:16:28.705 "adrfam": "IPv4", 00:16:28.705 "traddr": "10.0.0.1", 00:16:28.705 "trsvcid": "60672" 00:16:28.705 }, 00:16:28.705 "auth": { 00:16:28.705 "state": "completed", 00:16:28.705 "digest": "sha256", 00:16:28.705 "dhgroup": "ffdhe6144" 00:16:28.705 } 00:16:28.705 } 00:16:28.705 ]' 00:16:28.705 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:28.967 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.909 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.479 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.479 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.479 { 00:16:30.479 "cntlid": 39, 00:16:30.479 "qid": 0, 00:16:30.480 "state": "enabled", 00:16:30.480 "thread": "nvmf_tgt_poll_group_000", 00:16:30.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:30.480 "listen_address": { 00:16:30.480 "trtype": 
"TCP", 00:16:30.480 "adrfam": "IPv4", 00:16:30.480 "traddr": "10.0.0.2", 00:16:30.480 "trsvcid": "4420" 00:16:30.480 }, 00:16:30.480 "peer_address": { 00:16:30.480 "trtype": "TCP", 00:16:30.480 "adrfam": "IPv4", 00:16:30.480 "traddr": "10.0.0.1", 00:16:30.480 "trsvcid": "60704" 00:16:30.480 }, 00:16:30.480 "auth": { 00:16:30.480 "state": "completed", 00:16:30.480 "digest": "sha256", 00:16:30.480 "dhgroup": "ffdhe6144" 00:16:30.480 } 00:16:30.480 } 00:16:30.480 ]' 00:16:30.480 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.741 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.741 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.741 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.741 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.741 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.741 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.741 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.002 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:31.002 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.574 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.835 16:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.835 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.406 00:16:32.406 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.406 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.406 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.666 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.666 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.666 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.666 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.666 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.666 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.666 { 00:16:32.666 "cntlid": 41, 00:16:32.666 "qid": 0, 00:16:32.666 "state": "enabled", 00:16:32.666 "thread": "nvmf_tgt_poll_group_000", 00:16:32.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:32.666 "listen_address": { 00:16:32.666 "trtype": "TCP", 00:16:32.666 "adrfam": "IPv4", 00:16:32.666 "traddr": "10.0.0.2", 00:16:32.666 "trsvcid": "4420" 00:16:32.666 }, 00:16:32.666 "peer_address": { 00:16:32.666 "trtype": "TCP", 00:16:32.666 "adrfam": "IPv4", 00:16:32.666 "traddr": "10.0.0.1", 00:16:32.666 "trsvcid": "55036" 00:16:32.666 }, 00:16:32.666 "auth": { 00:16:32.666 "state": "completed", 00:16:32.666 "digest": "sha256", 00:16:32.666 "dhgroup": "ffdhe8192" 00:16:32.666 } 00:16:32.666 } 00:16:32.666 ]' 00:16:32.667 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.667 16:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.667 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.667 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.667 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.667 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.667 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.667 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.927 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:32.927 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.865 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.436 00:16:34.436 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.436 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.436 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.697 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.697 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.697 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.697 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.697 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.697 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.697 { 00:16:34.697 "cntlid": 43, 00:16:34.697 "qid": 0, 00:16:34.697 "state": "enabled", 00:16:34.697 "thread": "nvmf_tgt_poll_group_000", 00:16:34.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:34.697 "listen_address": { 00:16:34.697 "trtype": "TCP", 00:16:34.697 "adrfam": "IPv4", 00:16:34.697 "traddr": "10.0.0.2", 00:16:34.697 "trsvcid": "4420" 00:16:34.698 }, 00:16:34.698 "peer_address": { 00:16:34.698 "trtype": "TCP", 00:16:34.698 "adrfam": "IPv4", 00:16:34.698 "traddr": "10.0.0.1", 00:16:34.698 "trsvcid": "55058" 00:16:34.698 }, 00:16:34.698 "auth": { 00:16:34.698 "state": "completed", 00:16:34.698 "digest": "sha256", 00:16:34.698 "dhgroup": "ffdhe8192" 00:16:34.698 } 00:16:34.698 } 00:16:34.698 ]' 00:16:34.698 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.698 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.698 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.698 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.698 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.698 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:34.698 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.698 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.958 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:34.958 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:35.529 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.789 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.361 00:16:36.361 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.361 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.361 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.621 { 00:16:36.621 "cntlid": 45, 00:16:36.621 "qid": 0, 00:16:36.621 "state": "enabled", 00:16:36.621 "thread": "nvmf_tgt_poll_group_000", 00:16:36.621 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:36.621 "listen_address": { 00:16:36.621 "trtype": "TCP", 00:16:36.621 "adrfam": "IPv4", 00:16:36.621 "traddr": "10.0.0.2", 00:16:36.621 "trsvcid": "4420" 00:16:36.621 }, 00:16:36.621 "peer_address": { 00:16:36.621 "trtype": "TCP", 00:16:36.621 "adrfam": "IPv4", 00:16:36.621 "traddr": "10.0.0.1", 00:16:36.621 "trsvcid": "55082" 00:16:36.621 }, 00:16:36.621 "auth": { 00:16:36.621 "state": "completed", 00:16:36.621 "digest": "sha256", 00:16:36.621 "dhgroup": "ffdhe8192" 00:16:36.621 } 00:16:36.621 } 00:16:36.621 ]' 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.621 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.881 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:36.881 16:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.822 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.823 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.823 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.823 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.393 00:16:38.393 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:38.393 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.393 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.393 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.393 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.393 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.393 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.653 { 00:16:38.653 "cntlid": 47, 00:16:38.653 "qid": 0, 00:16:38.653 "state": "enabled", 00:16:38.653 "thread": "nvmf_tgt_poll_group_000", 00:16:38.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:38.653 "listen_address": { 00:16:38.653 "trtype": "TCP", 00:16:38.653 "adrfam": "IPv4", 00:16:38.653 "traddr": "10.0.0.2", 00:16:38.653 "trsvcid": "4420" 00:16:38.653 }, 00:16:38.653 "peer_address": { 00:16:38.653 "trtype": "TCP", 00:16:38.653 "adrfam": "IPv4", 00:16:38.653 "traddr": "10.0.0.1", 00:16:38.653 "trsvcid": "55120" 00:16:38.653 }, 00:16:38.653 "auth": { 00:16:38.653 "state": "completed", 00:16:38.653 "digest": "sha256", 00:16:38.653 "dhgroup": "ffdhe8192" 00:16:38.653 } 00:16:38.653 } 00:16:38.653 ]' 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.653 16:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.914 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:38.914 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:39.484 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.485 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:39.485 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.485 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.745 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.005 00:16:40.005 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.005 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.005 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.266 16:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.266 { 00:16:40.266 "cntlid": 49, 00:16:40.266 "qid": 0, 00:16:40.266 "state": "enabled", 00:16:40.266 "thread": "nvmf_tgt_poll_group_000", 00:16:40.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:40.266 "listen_address": { 00:16:40.266 "trtype": "TCP", 00:16:40.266 "adrfam": "IPv4", 00:16:40.266 "traddr": "10.0.0.2", 00:16:40.266 "trsvcid": "4420" 00:16:40.266 }, 00:16:40.266 "peer_address": { 00:16:40.266 "trtype": "TCP", 00:16:40.266 "adrfam": "IPv4", 00:16:40.266 "traddr": "10.0.0.1", 00:16:40.266 "trsvcid": "55144" 00:16:40.266 }, 00:16:40.266 "auth": { 00:16:40.266 "state": "completed", 00:16:40.266 "digest": "sha384", 00:16:40.266 "dhgroup": "null" 00:16:40.266 } 00:16:40.266 } 00:16:40.266 ]' 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.266 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.526 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:40.526 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:41.468 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.469 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.729 00:16:41.729 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.729 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.729 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.989 { 00:16:41.989 "cntlid": 51, 
00:16:41.989 "qid": 0, 00:16:41.989 "state": "enabled", 00:16:41.989 "thread": "nvmf_tgt_poll_group_000", 00:16:41.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:41.989 "listen_address": { 00:16:41.989 "trtype": "TCP", 00:16:41.989 "adrfam": "IPv4", 00:16:41.989 "traddr": "10.0.0.2", 00:16:41.989 "trsvcid": "4420" 00:16:41.989 }, 00:16:41.989 "peer_address": { 00:16:41.989 "trtype": "TCP", 00:16:41.989 "adrfam": "IPv4", 00:16:41.989 "traddr": "10.0.0.1", 00:16:41.989 "trsvcid": "55170" 00:16:41.989 }, 00:16:41.989 "auth": { 00:16:41.989 "state": "completed", 00:16:41.989 "digest": "sha384", 00:16:41.989 "dhgroup": "null" 00:16:41.989 } 00:16:41.989 } 00:16:41.989 ]' 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.989 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.249 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret 
DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:42.249 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.202 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.203 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.203 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.203 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.203 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.203 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.203 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.203 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.203 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.203 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.203 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.462 00:16:43.462 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.462 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.462 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.722 { 00:16:43.722 "cntlid": 53, 00:16:43.722 "qid": 0, 00:16:43.722 "state": "enabled", 00:16:43.722 "thread": "nvmf_tgt_poll_group_000", 00:16:43.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:43.722 "listen_address": { 00:16:43.722 "trtype": "TCP", 00:16:43.722 "adrfam": "IPv4", 00:16:43.722 "traddr": "10.0.0.2", 00:16:43.722 "trsvcid": "4420" 00:16:43.722 }, 00:16:43.722 "peer_address": { 00:16:43.722 "trtype": "TCP", 00:16:43.722 "adrfam": "IPv4", 00:16:43.722 "traddr": "10.0.0.1", 00:16:43.722 "trsvcid": "42948" 00:16:43.722 }, 00:16:43.722 "auth": { 00:16:43.722 "state": "completed", 00:16:43.722 "digest": "sha384", 00:16:43.722 "dhgroup": "null" 00:16:43.722 } 00:16:43.722 } 
00:16:43.722 ]' 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.722 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.981 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:43.981 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.919 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.919 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.178 00:16:45.178 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.178 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.178 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.178 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.436 { 00:16:45.436 "cntlid": 55, 00:16:45.436 "qid": 0, 00:16:45.436 "state": "enabled", 00:16:45.436 "thread": "nvmf_tgt_poll_group_000", 00:16:45.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:45.436 "listen_address": { 00:16:45.436 "trtype": "TCP", 00:16:45.436 "adrfam": "IPv4", 00:16:45.436 "traddr": "10.0.0.2", 00:16:45.436 "trsvcid": "4420" 00:16:45.436 }, 00:16:45.436 "peer_address": { 00:16:45.436 "trtype": "TCP", 00:16:45.436 "adrfam": "IPv4", 00:16:45.436 "traddr": "10.0.0.1", 00:16:45.436 "trsvcid": "42962" 00:16:45.436 }, 00:16:45.436 "auth": { 00:16:45.436 "state": "completed", 00:16:45.436 "digest": "sha384", 00:16:45.436 "dhgroup": "null" 00:16:45.436 } 00:16:45.436 } 00:16:45.436 ]' 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.436 16:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.436 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.695 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:45.695 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:46.272 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.272 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:46.272 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.272 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.272 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.272 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.272 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.272 16:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.272 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.531 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.790 00:16:46.790 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.790 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.790 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.051 { 00:16:47.051 "cntlid": 57, 00:16:47.051 "qid": 0, 00:16:47.051 "state": "enabled", 00:16:47.051 "thread": "nvmf_tgt_poll_group_000", 00:16:47.051 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:47.051 "listen_address": { 00:16:47.051 "trtype": "TCP", 00:16:47.051 "adrfam": "IPv4", 00:16:47.051 "traddr": "10.0.0.2", 00:16:47.051 "trsvcid": "4420" 00:16:47.051 }, 00:16:47.051 "peer_address": { 00:16:47.051 "trtype": "TCP", 00:16:47.051 "adrfam": "IPv4", 00:16:47.051 "traddr": "10.0.0.1", 00:16:47.051 "trsvcid": "42988" 00:16:47.051 }, 00:16:47.051 "auth": { 00:16:47.051 "state": "completed", 00:16:47.051 "digest": "sha384", 00:16:47.051 "dhgroup": "ffdhe2048" 00:16:47.051 } 00:16:47.051 } 00:16:47.051 ]' 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.051 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.312 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret 
DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:47.312 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:48.256 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.256 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:48.256 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.256 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.256 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.256 16:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.256 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.516 00:16:48.516 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.516 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.516 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.777 { 00:16:48.777 "cntlid": 59, 00:16:48.777 "qid": 0, 00:16:48.777 "state": "enabled", 00:16:48.777 "thread": "nvmf_tgt_poll_group_000", 00:16:48.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:48.777 "listen_address": { 00:16:48.777 "trtype": "TCP", 00:16:48.777 "adrfam": "IPv4", 00:16:48.777 "traddr": "10.0.0.2", 00:16:48.777 "trsvcid": "4420" 00:16:48.777 }, 00:16:48.777 "peer_address": { 00:16:48.777 "trtype": "TCP", 00:16:48.777 "adrfam": "IPv4", 00:16:48.777 "traddr": "10.0.0.1", 00:16:48.777 "trsvcid": "43006" 00:16:48.777 }, 00:16:48.777 "auth": { 00:16:48.777 "state": 
"completed", 00:16:48.777 "digest": "sha384", 00:16:48.777 "dhgroup": "ffdhe2048" 00:16:48.777 } 00:16:48.777 } 00:16:48.777 ]' 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.777 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.038 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:49.038 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:49.980 16:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.980 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.242 00:16:50.242 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.242 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.242 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.503 
16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.503 { 00:16:50.503 "cntlid": 61, 00:16:50.503 "qid": 0, 00:16:50.503 "state": "enabled", 00:16:50.503 "thread": "nvmf_tgt_poll_group_000", 00:16:50.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:50.503 "listen_address": { 00:16:50.503 "trtype": "TCP", 00:16:50.503 "adrfam": "IPv4", 00:16:50.503 "traddr": "10.0.0.2", 00:16:50.503 "trsvcid": "4420" 00:16:50.503 }, 00:16:50.503 "peer_address": { 00:16:50.503 "trtype": "TCP", 00:16:50.503 "adrfam": "IPv4", 00:16:50.503 "traddr": "10.0.0.1", 00:16:50.503 "trsvcid": "43030" 00:16:50.503 }, 00:16:50.503 "auth": { 00:16:50.503 "state": "completed", 00:16:50.503 "digest": "sha384", 00:16:50.503 "dhgroup": "ffdhe2048" 00:16:50.503 } 00:16:50.503 } 00:16:50.503 ]' 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.503 16:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.503 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.764 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:50.764 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:51.334 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.334 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:51.334 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.334 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.334 
16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.334 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.595 16:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.595 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.855 00:16:51.855 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.855 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.855 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.115 { 00:16:52.115 "cntlid": 63, 00:16:52.115 
"qid": 0, 00:16:52.115 "state": "enabled", 00:16:52.115 "thread": "nvmf_tgt_poll_group_000", 00:16:52.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:52.115 "listen_address": { 00:16:52.115 "trtype": "TCP", 00:16:52.115 "adrfam": "IPv4", 00:16:52.115 "traddr": "10.0.0.2", 00:16:52.115 "trsvcid": "4420" 00:16:52.115 }, 00:16:52.115 "peer_address": { 00:16:52.115 "trtype": "TCP", 00:16:52.115 "adrfam": "IPv4", 00:16:52.115 "traddr": "10.0.0.1", 00:16:52.115 "trsvcid": "38050" 00:16:52.115 }, 00:16:52.115 "auth": { 00:16:52.115 "state": "completed", 00:16:52.115 "digest": "sha384", 00:16:52.115 "dhgroup": "ffdhe2048" 00:16:52.115 } 00:16:52.115 } 00:16:52.115 ]' 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.115 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.115 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.116 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.116 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.116 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.376 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:52.376 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.317 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:53.317 16:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:53.317 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.317 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.317 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.317 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.317 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.317 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.317 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.318 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.318 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.318 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.318 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.318 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.579 00:16:53.579 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.579 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.579 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.839 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.839 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.839 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.839 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.839 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.839 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.839 { 00:16:53.839 "cntlid": 65, 00:16:53.839 "qid": 0, 00:16:53.839 "state": "enabled", 00:16:53.839 "thread": "nvmf_tgt_poll_group_000", 00:16:53.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:53.839 "listen_address": { 00:16:53.839 "trtype": "TCP", 00:16:53.839 "adrfam": "IPv4", 00:16:53.839 "traddr": "10.0.0.2", 00:16:53.839 "trsvcid": "4420" 00:16:53.839 }, 00:16:53.839 "peer_address": { 00:16:53.839 "trtype": "TCP", 00:16:53.839 "adrfam": "IPv4", 00:16:53.839 "traddr": "10.0.0.1", 00:16:53.839 "trsvcid": "38070" 00:16:53.839 }, 00:16:53.839 "auth": { 00:16:53.839 "state": 
"completed", 00:16:53.839 "digest": "sha384", 00:16:53.839 "dhgroup": "ffdhe3072" 00:16:53.839 } 00:16:53.839 } 00:16:53.839 ]' 00:16:53.839 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.840 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.840 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.840 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.840 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.840 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.840 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.840 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.101 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:54.101 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret 
DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:16:54.672 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.933 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.194 00:16:55.194 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.194 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.194 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.455 { 00:16:55.455 "cntlid": 67, 00:16:55.455 "qid": 0, 00:16:55.455 "state": "enabled", 00:16:55.455 "thread": "nvmf_tgt_poll_group_000", 00:16:55.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:55.455 "listen_address": { 00:16:55.455 "trtype": "TCP", 00:16:55.455 "adrfam": "IPv4", 00:16:55.455 "traddr": "10.0.0.2", 00:16:55.455 "trsvcid": "4420" 00:16:55.455 }, 00:16:55.455 "peer_address": { 00:16:55.455 "trtype": "TCP", 00:16:55.455 "adrfam": "IPv4", 00:16:55.455 "traddr": "10.0.0.1", 00:16:55.455 "trsvcid": "38094" 00:16:55.455 }, 00:16:55.455 "auth": { 00:16:55.455 "state": "completed", 00:16:55.455 "digest": "sha384", 00:16:55.455 "dhgroup": "ffdhe3072" 00:16:55.455 } 00:16:55.455 } 00:16:55.455 ]' 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.455 16:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.455 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.715 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:55.715 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.656 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.921 00:16:56.921 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.921 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.921 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.216 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.216 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.216 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.216 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.216 16:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.216 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.216 { 00:16:57.216 "cntlid": 69, 00:16:57.216 "qid": 0, 00:16:57.216 "state": "enabled", 00:16:57.216 "thread": "nvmf_tgt_poll_group_000", 00:16:57.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:57.216 "listen_address": { 00:16:57.216 "trtype": "TCP", 00:16:57.216 "adrfam": "IPv4", 00:16:57.216 "traddr": "10.0.0.2", 00:16:57.216 "trsvcid": "4420" 00:16:57.216 }, 00:16:57.216 "peer_address": { 00:16:57.216 "trtype": "TCP", 00:16:57.216 "adrfam": "IPv4", 00:16:57.216 "traddr": "10.0.0.1", 00:16:57.216 "trsvcid": "38114" 00:16:57.216 }, 00:16:57.216 "auth": { 00:16:57.216 "state": "completed", 00:16:57.216 "digest": "sha384", 00:16:57.216 "dhgroup": "ffdhe3072" 00:16:57.216 } 00:16:57.216 } 00:16:57.216 ]' 00:16:57.216 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.216 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.216 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.216 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.216 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.216 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.216 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.216 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.516 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:57.516 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:16:58.116 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.116 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:58.116 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.116 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.116 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.116 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.116 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.116 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.376 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.377 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.377 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.377 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.377 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.637 00:16:58.637 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.637 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.637 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.897 { 00:16:58.897 "cntlid": 71, 00:16:58.897 "qid": 0, 00:16:58.897 "state": "enabled", 00:16:58.897 "thread": "nvmf_tgt_poll_group_000", 00:16:58.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:16:58.897 "listen_address": { 00:16:58.897 "trtype": "TCP", 00:16:58.897 "adrfam": "IPv4", 00:16:58.897 "traddr": "10.0.0.2", 00:16:58.897 "trsvcid": "4420" 00:16:58.897 }, 00:16:58.897 "peer_address": { 00:16:58.897 "trtype": "TCP", 00:16:58.897 "adrfam": "IPv4", 00:16:58.897 "traddr": "10.0.0.1", 
00:16:58.897 "trsvcid": "38136" 00:16:58.897 }, 00:16:58.897 "auth": { 00:16:58.897 "state": "completed", 00:16:58.897 "digest": "sha384", 00:16:58.897 "dhgroup": "ffdhe3072" 00:16:58.897 } 00:16:58.897 } 00:16:58.897 ]' 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.897 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.158 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:16:59.158 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.101 16:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.101 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.361 00:17:00.361 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.361 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.361 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.622 { 00:17:00.622 "cntlid": 73, 00:17:00.622 "qid": 0, 00:17:00.622 "state": "enabled", 00:17:00.622 "thread": "nvmf_tgt_poll_group_000", 00:17:00.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:00.622 "listen_address": { 00:17:00.622 "trtype": "TCP", 00:17:00.622 "adrfam": "IPv4", 00:17:00.622 "traddr": "10.0.0.2", 00:17:00.622 "trsvcid": "4420" 00:17:00.622 }, 00:17:00.622 "peer_address": { 00:17:00.622 "trtype": "TCP", 00:17:00.622 "adrfam": "IPv4", 00:17:00.622 "traddr": "10.0.0.1", 00:17:00.622 "trsvcid": "38152" 00:17:00.622 }, 00:17:00.622 "auth": { 00:17:00.622 "state": "completed", 00:17:00.622 "digest": "sha384", 00:17:00.622 "dhgroup": "ffdhe4096" 00:17:00.622 } 00:17:00.622 } 00:17:00.622 ]' 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.622 16:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.622 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.882 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:00.882 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:01.824 16:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.824 16:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.824 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.084 00:17:02.084 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.084 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.084 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.345 { 00:17:02.345 "cntlid": 75, 00:17:02.345 "qid": 0, 00:17:02.345 "state": "enabled", 00:17:02.345 "thread": "nvmf_tgt_poll_group_000", 00:17:02.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:02.345 "listen_address": { 00:17:02.345 "trtype": "TCP", 00:17:02.345 "adrfam": "IPv4", 00:17:02.345 "traddr": "10.0.0.2", 00:17:02.345 "trsvcid": "4420" 00:17:02.345 }, 00:17:02.345 "peer_address": { 00:17:02.345 "trtype": "TCP", 00:17:02.345 "adrfam": "IPv4", 00:17:02.345 "traddr": "10.0.0.1", 00:17:02.345 "trsvcid": "44346" 00:17:02.345 }, 00:17:02.345 "auth": { 00:17:02.345 "state": "completed", 00:17:02.345 "digest": "sha384", 00:17:02.345 "dhgroup": "ffdhe4096" 00:17:02.345 } 00:17:02.345 } 00:17:02.345 ]' 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.345 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.605 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:02.605 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:03.548 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.548 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:03.548 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.548 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.548 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.548 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:03.549 16:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.549 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.810 00:17:03.810 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.810 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.810 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.810 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.810 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.810 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.810 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.070 { 00:17:04.070 "cntlid": 77, 00:17:04.070 "qid": 0, 00:17:04.070 "state": "enabled", 00:17:04.070 "thread": "nvmf_tgt_poll_group_000", 00:17:04.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:04.070 "listen_address": { 00:17:04.070 "trtype": "TCP", 00:17:04.070 "adrfam": "IPv4", 00:17:04.070 "traddr": "10.0.0.2", 00:17:04.070 
"trsvcid": "4420" 00:17:04.070 }, 00:17:04.070 "peer_address": { 00:17:04.070 "trtype": "TCP", 00:17:04.070 "adrfam": "IPv4", 00:17:04.070 "traddr": "10.0.0.1", 00:17:04.070 "trsvcid": "44378" 00:17:04.070 }, 00:17:04.070 "auth": { 00:17:04.070 "state": "completed", 00:17:04.070 "digest": "sha384", 00:17:04.070 "dhgroup": "ffdhe4096" 00:17:04.070 } 00:17:04.070 } 00:17:04.070 ]' 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.070 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.332 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:04.332 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:04.903 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.903 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:04.903 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.903 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.903 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.903 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.903 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:04.903 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.164 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:05.164 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.164 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.164 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:05.164 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.164 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.164 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:05.164 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.165 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.165 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.165 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.165 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.165 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.426 00:17:05.426 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.426 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:05.426 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.687 { 00:17:05.687 "cntlid": 79, 00:17:05.687 "qid": 0, 00:17:05.687 "state": "enabled", 00:17:05.687 "thread": "nvmf_tgt_poll_group_000", 00:17:05.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:05.687 "listen_address": { 00:17:05.687 "trtype": "TCP", 00:17:05.687 "adrfam": "IPv4", 00:17:05.687 "traddr": "10.0.0.2", 00:17:05.687 "trsvcid": "4420" 00:17:05.687 }, 00:17:05.687 "peer_address": { 00:17:05.687 "trtype": "TCP", 00:17:05.687 "adrfam": "IPv4", 00:17:05.687 "traddr": "10.0.0.1", 00:17:05.687 "trsvcid": "44404" 00:17:05.687 }, 00:17:05.687 "auth": { 00:17:05.687 "state": "completed", 00:17:05.687 "digest": "sha384", 00:17:05.687 "dhgroup": "ffdhe4096" 00:17:05.687 } 00:17:05.687 } 00:17:05.687 ]' 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.687 16:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.687 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.948 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:05.948 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.890 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.151 00:17:07.151 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.151 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.151 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.412 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.412 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.412 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.412 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.412 16:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.412 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.412 { 00:17:07.412 "cntlid": 81, 00:17:07.412 "qid": 0, 00:17:07.412 "state": "enabled", 00:17:07.412 "thread": "nvmf_tgt_poll_group_000", 00:17:07.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:07.412 "listen_address": { 00:17:07.412 "trtype": "TCP", 00:17:07.412 "adrfam": "IPv4", 00:17:07.412 "traddr": "10.0.0.2", 00:17:07.412 "trsvcid": "4420" 00:17:07.412 }, 00:17:07.412 "peer_address": { 00:17:07.412 "trtype": "TCP", 00:17:07.412 "adrfam": "IPv4", 00:17:07.412 "traddr": "10.0.0.1", 00:17:07.412 "trsvcid": "44438" 00:17:07.412 }, 00:17:07.412 "auth": { 00:17:07.412 "state": "completed", 00:17:07.412 "digest": "sha384", 00:17:07.412 "dhgroup": "ffdhe6144" 00:17:07.412 } 00:17:07.412 } 00:17:07.412 ]' 00:17:07.412 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.412 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.412 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.672 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.672 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.672 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.672 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.672 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.672 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:07.672 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.614 16:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.614 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.186 00:17:09.186 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.186 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.186 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.186 { 00:17:09.186 "cntlid": 83, 00:17:09.186 "qid": 0, 00:17:09.186 "state": "enabled", 00:17:09.186 "thread": "nvmf_tgt_poll_group_000", 00:17:09.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:09.186 "listen_address": { 00:17:09.186 "trtype": "TCP", 00:17:09.186 "adrfam": "IPv4", 00:17:09.186 "traddr": "10.0.0.2", 00:17:09.186 
"trsvcid": "4420" 00:17:09.186 }, 00:17:09.186 "peer_address": { 00:17:09.186 "trtype": "TCP", 00:17:09.186 "adrfam": "IPv4", 00:17:09.186 "traddr": "10.0.0.1", 00:17:09.186 "trsvcid": "44480" 00:17:09.186 }, 00:17:09.186 "auth": { 00:17:09.186 "state": "completed", 00:17:09.186 "digest": "sha384", 00:17:09.186 "dhgroup": "ffdhe6144" 00:17:09.186 } 00:17:09.186 } 00:17:09.186 ]' 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.186 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.446 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.446 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.446 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.446 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.446 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.446 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:09.446 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.387 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.959 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.959 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.959 { 00:17:10.959 "cntlid": 85, 00:17:10.959 "qid": 0, 00:17:10.959 "state": "enabled", 00:17:10.959 "thread": "nvmf_tgt_poll_group_000", 00:17:10.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:10.959 "listen_address": { 00:17:10.959 "trtype": "TCP", 00:17:10.959 "adrfam": "IPv4", 00:17:10.959 "traddr": "10.0.0.2", 00:17:10.959 "trsvcid": "4420" 00:17:10.959 }, 00:17:10.959 "peer_address": { 00:17:10.959 "trtype": "TCP", 00:17:10.959 "adrfam": "IPv4", 00:17:10.960 "traddr": "10.0.0.1", 00:17:10.960 "trsvcid": "44506" 00:17:10.960 }, 00:17:10.960 "auth": { 00:17:10.960 "state": "completed", 00:17:10.960 "digest": "sha384", 00:17:10.960 "dhgroup": "ffdhe6144" 00:17:10.960 } 00:17:10.960 } 00:17:10.960 ]' 00:17:10.960 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.220 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.220 16:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.220 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.220 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.220 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.220 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.220 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.480 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:11.480 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:12.052 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.052 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:12.052 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.052 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.052 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.052 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.052 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:12.052 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.313 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.573 00:17:12.573 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.574 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.574 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.833 { 00:17:12.833 "cntlid": 87, 00:17:12.833 "qid": 0, 00:17:12.833 "state": "enabled", 00:17:12.833 "thread": "nvmf_tgt_poll_group_000", 00:17:12.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:12.833 "listen_address": { 00:17:12.833 "trtype": "TCP", 00:17:12.833 "adrfam": "IPv4", 00:17:12.833 "traddr": "10.0.0.2", 00:17:12.833 "trsvcid": "4420" 00:17:12.833 }, 00:17:12.833 "peer_address": { 00:17:12.833 "trtype": "TCP", 00:17:12.833 "adrfam": "IPv4", 00:17:12.833 "traddr": "10.0.0.1", 00:17:12.833 "trsvcid": "54228" 00:17:12.833 }, 00:17:12.833 "auth": { 00:17:12.833 "state": "completed", 00:17:12.833 "digest": "sha384", 00:17:12.833 "dhgroup": "ffdhe6144" 00:17:12.833 } 00:17:12.833 } 00:17:12.833 ]' 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.833 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.091 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.091 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.091 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.091 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.091 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.091 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:13.091 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:14.029 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.029 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.030 16:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.030 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.598 00:17:14.598 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.598 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.598 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.858 { 00:17:14.858 "cntlid": 89, 00:17:14.858 "qid": 0, 00:17:14.858 "state": "enabled", 00:17:14.858 "thread": "nvmf_tgt_poll_group_000", 00:17:14.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:14.858 "listen_address": { 00:17:14.858 "trtype": "TCP", 00:17:14.858 "adrfam": "IPv4", 00:17:14.858 "traddr": "10.0.0.2", 00:17:14.858 
"trsvcid": "4420" 00:17:14.858 }, 00:17:14.858 "peer_address": { 00:17:14.858 "trtype": "TCP", 00:17:14.858 "adrfam": "IPv4", 00:17:14.858 "traddr": "10.0.0.1", 00:17:14.858 "trsvcid": "54246" 00:17:14.858 }, 00:17:14.858 "auth": { 00:17:14.858 "state": "completed", 00:17:14.858 "digest": "sha384", 00:17:14.858 "dhgroup": "ffdhe8192" 00:17:14.858 } 00:17:14.858 } 00:17:14.858 ]' 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.858 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.119 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:15.119 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:16.057 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.057 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:16.057 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.057 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.058 16:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.058 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.628 00:17:16.628 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.628 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.628 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.887 { 00:17:16.887 "cntlid": 91, 00:17:16.887 "qid": 0, 00:17:16.887 "state": "enabled", 00:17:16.887 "thread": "nvmf_tgt_poll_group_000", 00:17:16.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:16.887 "listen_address": { 00:17:16.887 "trtype": "TCP", 00:17:16.887 "adrfam": "IPv4", 00:17:16.887 "traddr": "10.0.0.2", 00:17:16.887 "trsvcid": "4420" 00:17:16.887 }, 00:17:16.887 "peer_address": { 00:17:16.887 "trtype": "TCP", 00:17:16.887 "adrfam": "IPv4", 00:17:16.887 "traddr": "10.0.0.1", 00:17:16.887 "trsvcid": "54280" 00:17:16.887 }, 00:17:16.887 "auth": { 00:17:16.887 "state": "completed", 00:17:16.887 "digest": "sha384", 00:17:16.887 "dhgroup": "ffdhe8192" 00:17:16.887 } 00:17:16.887 } 00:17:16.887 ]' 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.887 16:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.887 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.888 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.888 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.888 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.888 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.147 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:17.147 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.085 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.654 00:17:18.654 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.654 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.654 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.914 16:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.914 { 00:17:18.914 "cntlid": 93, 00:17:18.914 "qid": 0, 00:17:18.914 "state": "enabled", 00:17:18.914 "thread": "nvmf_tgt_poll_group_000", 00:17:18.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:18.914 "listen_address": { 00:17:18.914 "trtype": "TCP", 00:17:18.914 "adrfam": "IPv4", 00:17:18.914 "traddr": "10.0.0.2", 00:17:18.914 "trsvcid": "4420" 00:17:18.914 }, 00:17:18.914 "peer_address": { 00:17:18.914 "trtype": "TCP", 00:17:18.914 "adrfam": "IPv4", 00:17:18.914 "traddr": "10.0.0.1", 00:17:18.914 "trsvcid": "54316" 00:17:18.914 }, 00:17:18.914 "auth": { 00:17:18.914 "state": "completed", 00:17:18.914 "digest": "sha384", 00:17:18.914 "dhgroup": "ffdhe8192" 00:17:18.914 } 00:17:18.914 } 00:17:18.914 ]' 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.914 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.174 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:19.174 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.116 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.686 00:17:20.686 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.686 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.686 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.947 { 00:17:20.947 "cntlid": 95, 00:17:20.947 "qid": 0, 00:17:20.947 "state": "enabled", 00:17:20.947 "thread": "nvmf_tgt_poll_group_000", 00:17:20.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:20.947 "listen_address": { 00:17:20.947 "trtype": "TCP", 00:17:20.947 "adrfam": 
"IPv4", 00:17:20.947 "traddr": "10.0.0.2", 00:17:20.947 "trsvcid": "4420" 00:17:20.947 }, 00:17:20.947 "peer_address": { 00:17:20.947 "trtype": "TCP", 00:17:20.947 "adrfam": "IPv4", 00:17:20.947 "traddr": "10.0.0.1", 00:17:20.947 "trsvcid": "54326" 00:17:20.947 }, 00:17:20.947 "auth": { 00:17:20.947 "state": "completed", 00:17:20.947 "digest": "sha384", 00:17:20.947 "dhgroup": "ffdhe8192" 00:17:20.947 } 00:17:20.947 } 00:17:20.947 ]' 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.947 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.948 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.948 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.948 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.948 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.948 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.208 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:21.208 16:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 
008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:21.780 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.780 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:21.780 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.780 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.040 
16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.040 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.301 00:17:22.301 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.301 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.301 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.561 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.561 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.561 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.561 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.561 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.561 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.561 { 00:17:22.561 "cntlid": 97, 00:17:22.561 "qid": 0, 00:17:22.561 "state": "enabled", 00:17:22.562 "thread": "nvmf_tgt_poll_group_000", 00:17:22.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:22.562 "listen_address": { 00:17:22.562 "trtype": "TCP", 00:17:22.562 "adrfam": "IPv4", 00:17:22.562 "traddr": "10.0.0.2", 00:17:22.562 "trsvcid": "4420" 00:17:22.562 }, 00:17:22.562 "peer_address": { 00:17:22.562 "trtype": "TCP", 00:17:22.562 "adrfam": "IPv4", 00:17:22.562 "traddr": "10.0.0.1", 00:17:22.562 "trsvcid": "46670" 00:17:22.562 }, 00:17:22.562 "auth": { 00:17:22.562 "state": "completed", 00:17:22.562 "digest": "sha512", 00:17:22.562 "dhgroup": "null" 00:17:22.562 } 00:17:22.562 } 00:17:22.562 ]' 00:17:22.562 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.562 16:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.562 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.562 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.562 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.562 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.562 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.562 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.823 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:22.823 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.763 16:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.763 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.023 00:17:24.023 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.023 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.023 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.023 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.023 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.023 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.023 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.285 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.285 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.285 { 00:17:24.285 "cntlid": 99, 00:17:24.285 "qid": 0, 00:17:24.285 "state": "enabled", 00:17:24.285 "thread": "nvmf_tgt_poll_group_000", 00:17:24.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:24.285 "listen_address": { 00:17:24.285 "trtype": "TCP", 00:17:24.285 "adrfam": "IPv4", 00:17:24.285 "traddr": "10.0.0.2", 00:17:24.285 "trsvcid": "4420" 00:17:24.285 }, 00:17:24.285 "peer_address": { 00:17:24.285 "trtype": "TCP", 00:17:24.285 "adrfam": "IPv4", 00:17:24.285 "traddr": "10.0.0.1", 00:17:24.285 "trsvcid": "46712" 00:17:24.285 }, 00:17:24.285 "auth": { 00:17:24.285 "state": "completed", 00:17:24.285 "digest": "sha512", 00:17:24.285 "dhgroup": "null" 00:17:24.285 } 00:17:24.285 } 00:17:24.285 ]' 00:17:24.285 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.285 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.285 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.285 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:24.285 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.285 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.285 
16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.285 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.546 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:24.546 16:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:25.118 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.380 
16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.380 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.641 00:17:25.641 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.641 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.641 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.903 { 00:17:25.903 "cntlid": 101, 00:17:25.903 "qid": 0, 00:17:25.903 "state": "enabled", 00:17:25.903 "thread": "nvmf_tgt_poll_group_000", 00:17:25.903 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:25.903 "listen_address": { 00:17:25.903 "trtype": "TCP", 00:17:25.903 "adrfam": "IPv4", 00:17:25.903 "traddr": "10.0.0.2", 00:17:25.903 "trsvcid": "4420" 00:17:25.903 }, 00:17:25.903 "peer_address": { 00:17:25.903 "trtype": "TCP", 00:17:25.903 "adrfam": "IPv4", 00:17:25.903 "traddr": "10.0.0.1", 00:17:25.903 "trsvcid": "46740" 00:17:25.903 }, 00:17:25.903 "auth": { 00:17:25.903 "state": "completed", 00:17:25.903 "digest": "sha512", 00:17:25.903 "dhgroup": "null" 00:17:25.903 } 00:17:25.903 } 00:17:25.903 ]' 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.903 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.166 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:26.166 16:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.108 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.369 00:17:27.369 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.369 
16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.369 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.630 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.630 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.630 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.630 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.630 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.630 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.630 { 00:17:27.630 "cntlid": 103, 00:17:27.630 "qid": 0, 00:17:27.630 "state": "enabled", 00:17:27.630 "thread": "nvmf_tgt_poll_group_000", 00:17:27.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:27.630 "listen_address": { 00:17:27.630 "trtype": "TCP", 00:17:27.631 "adrfam": "IPv4", 00:17:27.631 "traddr": "10.0.0.2", 00:17:27.631 "trsvcid": "4420" 00:17:27.631 }, 00:17:27.631 "peer_address": { 00:17:27.631 "trtype": "TCP", 00:17:27.631 "adrfam": "IPv4", 00:17:27.631 "traddr": "10.0.0.1", 00:17:27.631 "trsvcid": "46772" 00:17:27.631 }, 00:17:27.631 "auth": { 00:17:27.631 "state": "completed", 00:17:27.631 "digest": "sha512", 00:17:27.631 "dhgroup": "null" 00:17:27.631 } 00:17:27.631 } 00:17:27.631 ]' 00:17:27.631 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.631 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:17:27.631 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.631 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:27.631 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.631 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.631 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.631 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.892 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:27.892 16:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.464 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.725 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.986 00:17:28.986 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.986 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.986 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.247 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.247 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.247 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:29.247 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.247 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.247 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.247 { 00:17:29.247 "cntlid": 105, 00:17:29.247 "qid": 0, 00:17:29.247 "state": "enabled", 00:17:29.247 "thread": "nvmf_tgt_poll_group_000", 00:17:29.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:29.247 "listen_address": { 00:17:29.247 "trtype": "TCP", 00:17:29.247 "adrfam": "IPv4", 00:17:29.247 "traddr": "10.0.0.2", 00:17:29.247 "trsvcid": "4420" 00:17:29.247 }, 00:17:29.247 "peer_address": { 00:17:29.247 "trtype": "TCP", 00:17:29.247 "adrfam": "IPv4", 00:17:29.247 "traddr": "10.0.0.1", 00:17:29.247 "trsvcid": "46796" 00:17:29.247 }, 00:17:29.247 "auth": { 00:17:29.247 "state": "completed", 00:17:29.247 "digest": "sha512", 00:17:29.247 "dhgroup": "ffdhe2048" 00:17:29.247 } 00:17:29.247 } 00:17:29.247 ]' 00:17:29.247 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.247 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.247 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.247 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.247 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.247 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.247 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.247 16:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.507 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:29.507 16:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.449 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.710 00:17:30.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.710 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.970 { 00:17:30.970 "cntlid": 107, 00:17:30.970 "qid": 0, 00:17:30.970 "state": "enabled", 00:17:30.970 "thread": "nvmf_tgt_poll_group_000", 00:17:30.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:30.970 
"listen_address": { 00:17:30.970 "trtype": "TCP", 00:17:30.970 "adrfam": "IPv4", 00:17:30.970 "traddr": "10.0.0.2", 00:17:30.970 "trsvcid": "4420" 00:17:30.970 }, 00:17:30.970 "peer_address": { 00:17:30.970 "trtype": "TCP", 00:17:30.970 "adrfam": "IPv4", 00:17:30.970 "traddr": "10.0.0.1", 00:17:30.970 "trsvcid": "46828" 00:17:30.970 }, 00:17:30.970 "auth": { 00:17:30.970 "state": "completed", 00:17:30.970 "digest": "sha512", 00:17:30.970 "dhgroup": "ffdhe2048" 00:17:30.970 } 00:17:30.970 } 00:17:30.970 ]' 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.970 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.229 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:31.229 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:31.799 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.799 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:31.799 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.799 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.799 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.799 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.799 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.799 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.059 16:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.319 00:17:32.320 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:32.320 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.320 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.320 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.320 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.320 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.320 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.320 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.580 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.580 { 00:17:32.580 "cntlid": 109, 00:17:32.580 "qid": 0, 00:17:32.580 "state": "enabled", 00:17:32.580 "thread": "nvmf_tgt_poll_group_000", 00:17:32.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:32.580 "listen_address": { 00:17:32.580 "trtype": "TCP", 00:17:32.580 "adrfam": "IPv4", 00:17:32.580 "traddr": "10.0.0.2", 00:17:32.580 "trsvcid": "4420" 00:17:32.580 }, 00:17:32.580 "peer_address": { 00:17:32.580 "trtype": "TCP", 00:17:32.580 "adrfam": "IPv4", 00:17:32.580 "traddr": "10.0.0.1", 00:17:32.580 "trsvcid": "43224" 00:17:32.580 }, 00:17:32.580 "auth": { 00:17:32.580 "state": "completed", 00:17:32.580 "digest": "sha512", 00:17:32.580 "dhgroup": "ffdhe2048" 00:17:32.580 } 00:17:32.580 } 00:17:32.580 ]' 00:17:32.580 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.580 16:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.580 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.580 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.580 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.580 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.580 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.580 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.840 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:32.840 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:33.410 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.410 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:33.410 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.410 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.410 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.410 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.410 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.410 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:33.671 16:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.671 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.932 00:17:33.932 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.932 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.932 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.193 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.193 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.193 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.193 16:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.193 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.193 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.193 { 00:17:34.193 "cntlid": 111, 00:17:34.193 "qid": 0, 00:17:34.193 "state": "enabled", 00:17:34.193 "thread": "nvmf_tgt_poll_group_000", 00:17:34.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:34.193 "listen_address": { 00:17:34.193 "trtype": "TCP", 00:17:34.193 "adrfam": "IPv4", 00:17:34.193 "traddr": "10.0.0.2", 00:17:34.193 "trsvcid": "4420" 00:17:34.193 }, 00:17:34.193 "peer_address": { 00:17:34.193 "trtype": "TCP", 00:17:34.193 "adrfam": "IPv4", 00:17:34.193 "traddr": "10.0.0.1", 00:17:34.193 "trsvcid": "43254" 00:17:34.193 }, 00:17:34.193 "auth": { 00:17:34.193 "state": "completed", 00:17:34.193 "digest": "sha512", 00:17:34.193 "dhgroup": "ffdhe2048" 00:17:34.193 } 00:17:34.193 } 00:17:34.193 ]' 00:17:34.193 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.193 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.193 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.193 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.193 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.193 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.193 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.193 16:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.453 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:34.453 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:35.395 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.395 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.396 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.396 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.396 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.396 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.396 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.396 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.396 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.656 00:17:35.656 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.656 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.656 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.917 { 00:17:35.917 "cntlid": 113, 00:17:35.917 "qid": 0, 00:17:35.917 "state": "enabled", 00:17:35.917 "thread": "nvmf_tgt_poll_group_000", 00:17:35.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:35.917 "listen_address": { 
00:17:35.917 "trtype": "TCP", 00:17:35.917 "adrfam": "IPv4", 00:17:35.917 "traddr": "10.0.0.2", 00:17:35.917 "trsvcid": "4420" 00:17:35.917 }, 00:17:35.917 "peer_address": { 00:17:35.917 "trtype": "TCP", 00:17:35.917 "adrfam": "IPv4", 00:17:35.917 "traddr": "10.0.0.1", 00:17:35.917 "trsvcid": "43282" 00:17:35.917 }, 00:17:35.917 "auth": { 00:17:35.917 "state": "completed", 00:17:35.917 "digest": "sha512", 00:17:35.917 "dhgroup": "ffdhe3072" 00:17:35.917 } 00:17:35.917 } 00:17:35.917 ]' 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.917 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.178 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:36.178 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:36.836 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.836 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:36.836 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.836 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.836 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.836 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.836 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.836 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.167 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.441 00:17:37.442 16:29:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.442 { 00:17:37.442 "cntlid": 115, 00:17:37.442 "qid": 0, 00:17:37.442 "state": "enabled", 00:17:37.442 "thread": "nvmf_tgt_poll_group_000", 00:17:37.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:37.442 "listen_address": { 00:17:37.442 "trtype": "TCP", 00:17:37.442 "adrfam": "IPv4", 00:17:37.442 "traddr": "10.0.0.2", 00:17:37.442 "trsvcid": "4420" 00:17:37.442 }, 00:17:37.442 "peer_address": { 00:17:37.442 "trtype": "TCP", 00:17:37.442 "adrfam": "IPv4", 00:17:37.442 "traddr": "10.0.0.1", 00:17:37.442 "trsvcid": "43304" 00:17:37.442 }, 00:17:37.442 "auth": { 00:17:37.442 "state": "completed", 00:17:37.442 "digest": "sha512", 00:17:37.442 "dhgroup": "ffdhe3072" 00:17:37.442 } 00:17:37.442 } 00:17:37.442 ]' 00:17:37.442 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:17:37.702 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.703 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.703 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.703 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.703 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.703 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.703 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.963 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:37.963 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:38.533 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.533 16:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:38.533 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.533 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.533 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.533 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.533 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.533 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.793 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:38.793 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.793 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.793 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.793 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.793 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.793 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.793 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.794 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.794 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.794 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.794 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.794 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.053 00:17:39.053 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.053 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.053 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.313 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.313 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.313 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.313 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.313 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.313 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.313 { 00:17:39.313 "cntlid": 117, 00:17:39.313 "qid": 0, 00:17:39.313 "state": "enabled", 00:17:39.313 "thread": "nvmf_tgt_poll_group_000", 00:17:39.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:39.313 "listen_address": { 00:17:39.313 "trtype": "TCP", 00:17:39.313 "adrfam": "IPv4", 00:17:39.313 "traddr": "10.0.0.2", 00:17:39.313 "trsvcid": "4420" 00:17:39.313 }, 00:17:39.313 "peer_address": { 00:17:39.313 "trtype": "TCP", 00:17:39.313 "adrfam": "IPv4", 00:17:39.313 "traddr": "10.0.0.1", 00:17:39.313 "trsvcid": "43334" 00:17:39.313 }, 00:17:39.313 "auth": { 00:17:39.314 "state": "completed", 00:17:39.314 "digest": "sha512", 00:17:39.314 "dhgroup": "ffdhe3072" 00:17:39.314 } 00:17:39.314 } 00:17:39.314 ]' 00:17:39.314 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.314 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.314 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.314 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.314 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.314 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:39.314 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.314 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.574 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:39.574 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.515 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.776 00:17:40.776 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.776 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.776 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.036 { 00:17:41.036 "cntlid": 119, 00:17:41.036 "qid": 0, 00:17:41.036 "state": "enabled", 00:17:41.036 "thread": "nvmf_tgt_poll_group_000", 00:17:41.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:41.036 "listen_address": { 00:17:41.036 
"trtype": "TCP", 00:17:41.036 "adrfam": "IPv4", 00:17:41.036 "traddr": "10.0.0.2", 00:17:41.036 "trsvcid": "4420" 00:17:41.036 }, 00:17:41.036 "peer_address": { 00:17:41.036 "trtype": "TCP", 00:17:41.036 "adrfam": "IPv4", 00:17:41.036 "traddr": "10.0.0.1", 00:17:41.036 "trsvcid": "43374" 00:17:41.036 }, 00:17:41.036 "auth": { 00:17:41.036 "state": "completed", 00:17:41.036 "digest": "sha512", 00:17:41.036 "dhgroup": "ffdhe3072" 00:17:41.036 } 00:17:41.036 } 00:17:41.036 ]' 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.036 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.296 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:41.297 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.238 16:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.238 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:42.238 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.238 16:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.238 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.238 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.238 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.238 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.238 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.238 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.239 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.239 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.239 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.239 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.498 00:17:42.498 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.498 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.498 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.758 { 00:17:42.758 "cntlid": 121, 00:17:42.758 "qid": 0, 00:17:42.758 "state": "enabled", 00:17:42.758 "thread": "nvmf_tgt_poll_group_000", 00:17:42.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:42.758 "listen_address": { 00:17:42.758 "trtype": "TCP", 00:17:42.758 "adrfam": "IPv4", 00:17:42.758 "traddr": "10.0.0.2", 00:17:42.758 "trsvcid": "4420" 00:17:42.758 }, 00:17:42.758 "peer_address": { 00:17:42.758 "trtype": "TCP", 00:17:42.758 "adrfam": "IPv4", 00:17:42.758 "traddr": "10.0.0.1", 00:17:42.758 "trsvcid": "51472" 00:17:42.758 }, 00:17:42.758 "auth": { 00:17:42.758 "state": "completed", 00:17:42.758 "digest": "sha512", 00:17:42.758 "dhgroup": "ffdhe4096" 00:17:42.758 } 00:17:42.758 } 00:17:42.758 ]' 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.758 16:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.758 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.759 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.759 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.759 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.018 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:43.018 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.960 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.221 00:17:44.221 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.221 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.221 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.480 { 00:17:44.480 "cntlid": 123, 00:17:44.480 "qid": 0, 00:17:44.480 "state": "enabled", 00:17:44.480 "thread": "nvmf_tgt_poll_group_000", 00:17:44.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:44.480 "listen_address": { 00:17:44.480 "trtype": "TCP", 00:17:44.480 "adrfam": "IPv4", 00:17:44.480 "traddr": "10.0.0.2", 00:17:44.480 "trsvcid": "4420" 00:17:44.480 }, 00:17:44.480 "peer_address": { 00:17:44.480 "trtype": "TCP", 00:17:44.480 "adrfam": "IPv4", 00:17:44.480 "traddr": "10.0.0.1", 00:17:44.480 "trsvcid": "51510" 00:17:44.480 }, 00:17:44.480 "auth": { 00:17:44.480 "state": "completed", 00:17:44.480 "digest": "sha512", 00:17:44.480 "dhgroup": "ffdhe4096" 00:17:44.480 } 00:17:44.480 } 00:17:44.480 ]' 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.480 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.740 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:44.741 16:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:45.310 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.570 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.828 00:17:45.828 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.828 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.828 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.088 { 00:17:46.088 "cntlid": 125, 00:17:46.088 "qid": 0, 00:17:46.088 "state": "enabled", 00:17:46.088 "thread": "nvmf_tgt_poll_group_000", 00:17:46.088 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:46.088 "listen_address": { 00:17:46.088 "trtype": "TCP", 00:17:46.088 "adrfam": "IPv4", 00:17:46.088 "traddr": "10.0.0.2", 00:17:46.088 "trsvcid": "4420" 00:17:46.088 }, 00:17:46.088 "peer_address": { 00:17:46.088 "trtype": "TCP", 00:17:46.088 "adrfam": "IPv4", 00:17:46.088 "traddr": "10.0.0.1", 00:17:46.088 "trsvcid": "51546" 00:17:46.088 }, 00:17:46.088 "auth": { 00:17:46.088 "state": "completed", 00:17:46.088 "digest": "sha512", 00:17:46.088 "dhgroup": "ffdhe4096" 00:17:46.088 } 00:17:46.088 } 00:17:46.088 ]' 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.088 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.088 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:46.088 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.348 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.348 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.348 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.348 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:46.348 16:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:47.289 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.289 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.290 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.550 00:17:47.550 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:47.550 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.550 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.811 { 00:17:47.811 "cntlid": 127, 00:17:47.811 "qid": 0, 00:17:47.811 "state": "enabled", 00:17:47.811 "thread": "nvmf_tgt_poll_group_000", 00:17:47.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:47.811 "listen_address": { 00:17:47.811 "trtype": "TCP", 00:17:47.811 "adrfam": "IPv4", 00:17:47.811 "traddr": "10.0.0.2", 00:17:47.811 "trsvcid": "4420" 00:17:47.811 }, 00:17:47.811 "peer_address": { 00:17:47.811 "trtype": "TCP", 00:17:47.811 "adrfam": "IPv4", 00:17:47.811 "traddr": "10.0.0.1", 00:17:47.811 "trsvcid": "51568" 00:17:47.811 }, 00:17:47.811 "auth": { 00:17:47.811 "state": "completed", 00:17:47.811 "digest": "sha512", 00:17:47.811 "dhgroup": "ffdhe4096" 00:17:47.811 } 00:17:47.811 } 00:17:47.811 ]' 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.811 16:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.811 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.070 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.070 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.071 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.071 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:48.071 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.011 16:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.582 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.582 { 00:17:49.582 "cntlid": 129, 00:17:49.582 "qid": 0, 00:17:49.582 "state": "enabled", 00:17:49.582 "thread": "nvmf_tgt_poll_group_000", 00:17:49.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:49.582 "listen_address": { 00:17:49.582 "trtype": "TCP", 00:17:49.582 "adrfam": "IPv4", 00:17:49.582 "traddr": "10.0.0.2", 00:17:49.582 "trsvcid": "4420" 00:17:49.582 }, 00:17:49.582 "peer_address": { 00:17:49.582 "trtype": "TCP", 00:17:49.582 "adrfam": "IPv4", 00:17:49.582 "traddr": "10.0.0.1", 00:17:49.582 "trsvcid": "51600" 00:17:49.582 }, 00:17:49.582 "auth": { 00:17:49.582 "state": "completed", 00:17:49.582 "digest": "sha512", 00:17:49.582 "dhgroup": "ffdhe6144" 00:17:49.582 } 00:17:49.582 } 00:17:49.582 ]' 00:17:49.582 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.843 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.843 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.843 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.843 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.843 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:49.843 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.843 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.103 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:50.103 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:50.673 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.673 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:50.673 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.673 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.673 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.673 16:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.673 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.673 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.933 16:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.503 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.503 { 00:17:51.503 "cntlid": 131, 00:17:51.503 "qid": 0, 00:17:51.503 "state": 
"enabled", 00:17:51.503 "thread": "nvmf_tgt_poll_group_000", 00:17:51.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:51.503 "listen_address": { 00:17:51.503 "trtype": "TCP", 00:17:51.503 "adrfam": "IPv4", 00:17:51.503 "traddr": "10.0.0.2", 00:17:51.503 "trsvcid": "4420" 00:17:51.503 }, 00:17:51.503 "peer_address": { 00:17:51.503 "trtype": "TCP", 00:17:51.503 "adrfam": "IPv4", 00:17:51.503 "traddr": "10.0.0.1", 00:17:51.503 "trsvcid": "51628" 00:17:51.503 }, 00:17:51.503 "auth": { 00:17:51.503 "state": "completed", 00:17:51.503 "digest": "sha512", 00:17:51.503 "dhgroup": "ffdhe6144" 00:17:51.503 } 00:17:51.503 } 00:17:51.503 ]' 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.503 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.763 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.763 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.763 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.763 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret 
DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:51.763 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.704 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.705 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.705 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.275 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.275 { 00:17:53.275 "cntlid": 133, 00:17:53.275 "qid": 0, 00:17:53.275 "state": "enabled", 00:17:53.275 "thread": "nvmf_tgt_poll_group_000", 00:17:53.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:53.275 "listen_address": { 00:17:53.275 "trtype": "TCP", 00:17:53.275 "adrfam": "IPv4", 00:17:53.275 "traddr": "10.0.0.2", 00:17:53.275 "trsvcid": "4420" 00:17:53.275 }, 00:17:53.275 "peer_address": { 00:17:53.275 "trtype": "TCP", 00:17:53.275 "adrfam": "IPv4", 00:17:53.275 "traddr": "10.0.0.1", 00:17:53.275 "trsvcid": "47772" 00:17:53.275 }, 00:17:53.275 "auth": { 00:17:53.275 "state": "completed", 00:17:53.275 "digest": "sha512", 00:17:53.275 "dhgroup": "ffdhe6144" 00:17:53.275 } 
00:17:53.275 } 00:17:53.275 ]' 00:17:53.275 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.535 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.535 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.535 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.535 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.535 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.535 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.535 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.794 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:53.794 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:17:54.365 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:17:54.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.365 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:54.365 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.365 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.365 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.365 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.365 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.365 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.625 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.885 00:17:55.145 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.145 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.145 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.145 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.145 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:55.145 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.145 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.145 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.145 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.145 { 00:17:55.145 "cntlid": 135, 00:17:55.145 "qid": 0, 00:17:55.145 "state": "enabled", 00:17:55.145 "thread": "nvmf_tgt_poll_group_000", 00:17:55.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:55.145 "listen_address": { 00:17:55.145 "trtype": "TCP", 00:17:55.145 "adrfam": "IPv4", 00:17:55.145 "traddr": "10.0.0.2", 00:17:55.145 "trsvcid": "4420" 00:17:55.145 }, 00:17:55.145 "peer_address": { 00:17:55.145 "trtype": "TCP", 00:17:55.145 "adrfam": "IPv4", 00:17:55.145 "traddr": "10.0.0.1", 00:17:55.145 "trsvcid": "47794" 00:17:55.146 }, 00:17:55.146 "auth": { 00:17:55.146 "state": "completed", 00:17:55.146 "digest": "sha512", 00:17:55.146 "dhgroup": "ffdhe6144" 00:17:55.146 } 00:17:55.146 } 00:17:55.146 ]' 00:17:55.146 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.146 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.146 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.406 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.406 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.406 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.406 16:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.406 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.406 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:55.406 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.346 16:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.346 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.915 00:17:56.915 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.915 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.915 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.176 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.176 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.176 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.176 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.176 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.176 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.176 { 00:17:57.176 "cntlid": 137, 00:17:57.176 "qid": 0, 00:17:57.176 "state": "enabled", 00:17:57.176 "thread": "nvmf_tgt_poll_group_000", 00:17:57.176 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:57.176 "listen_address": { 00:17:57.176 "trtype": "TCP", 00:17:57.176 "adrfam": "IPv4", 00:17:57.176 "traddr": "10.0.0.2", 00:17:57.176 "trsvcid": "4420" 00:17:57.176 }, 00:17:57.176 "peer_address": { 00:17:57.176 "trtype": "TCP", 00:17:57.176 "adrfam": "IPv4", 00:17:57.176 "traddr": "10.0.0.1", 00:17:57.176 "trsvcid": "47822" 00:17:57.176 }, 00:17:57.176 "auth": { 00:17:57.176 "state": "completed", 00:17:57.176 "digest": "sha512", 00:17:57.176 "dhgroup": "ffdhe8192" 00:17:57.176 } 00:17:57.176 } 00:17:57.176 ]' 00:17:57.176 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.176 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.176 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.176 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.177 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.437 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.437 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.437 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.437 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret 
DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:57.437 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:17:58.380 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.380 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:58.380 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.380 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.380 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.380 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.380 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.381 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.642 16:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.642 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.213 00:17:59.213 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.213 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.213 16:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.213 { 00:17:59.213 "cntlid": 139, 00:17:59.213 "qid": 0, 00:17:59.213 "state": "enabled", 00:17:59.213 "thread": "nvmf_tgt_poll_group_000", 00:17:59.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:17:59.213 "listen_address": { 00:17:59.213 "trtype": "TCP", 00:17:59.213 "adrfam": "IPv4", 00:17:59.213 "traddr": "10.0.0.2", 00:17:59.213 "trsvcid": "4420" 00:17:59.213 }, 00:17:59.213 "peer_address": { 00:17:59.213 "trtype": "TCP", 00:17:59.213 "adrfam": "IPv4", 00:17:59.213 "traddr": "10.0.0.1", 00:17:59.213 "trsvcid": "47846" 00:17:59.213 }, 00:17:59.213 "auth": { 00:17:59.213 "state": 
"completed", 00:17:59.213 "digest": "sha512", 00:17:59.213 "dhgroup": "ffdhe8192" 00:17:59.213 } 00:17:59.213 } 00:17:59.213 ]' 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.213 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.474 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.474 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.474 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.474 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:17:59.474 16:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: --dhchap-ctrl-secret DHHC-1:02:NmJmNWI4MDQ5MDA3Yzg3NDI3OGExYjY2MzFjZDFlZTBkYWI4MmZmMzg0MmVkYWM3B7b5SA==: 00:18:00.415 16:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.416 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.987 00:18:00.987 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.987 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.987 16:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.248 
16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.248 { 00:18:01.248 "cntlid": 141, 00:18:01.248 "qid": 0, 00:18:01.248 "state": "enabled", 00:18:01.248 "thread": "nvmf_tgt_poll_group_000", 00:18:01.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:01.248 "listen_address": { 00:18:01.248 "trtype": "TCP", 00:18:01.248 "adrfam": "IPv4", 00:18:01.248 "traddr": "10.0.0.2", 00:18:01.248 "trsvcid": "4420" 00:18:01.248 }, 00:18:01.248 "peer_address": { 00:18:01.248 "trtype": "TCP", 00:18:01.248 "adrfam": "IPv4", 00:18:01.248 "traddr": "10.0.0.1", 00:18:01.248 "trsvcid": "47862" 00:18:01.248 }, 00:18:01.248 "auth": { 00:18:01.248 "state": "completed", 00:18:01.248 "digest": "sha512", 00:18:01.248 "dhgroup": "ffdhe8192" 00:18:01.248 } 00:18:01.248 } 00:18:01.248 ]' 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.248 16:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.248 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.509 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:18:01.509 16:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:01:ZDNiN2U1ZTFiMjljNTMwMjFmOWY2ZjBmZTRhNTg3MTUTmxRf: 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.452 
16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.452 16:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.452 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.025 00:18:03.025 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.025 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.025 16:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.286 { 00:18:03.286 "cntlid": 143, 
00:18:03.286 "qid": 0, 00:18:03.286 "state": "enabled", 00:18:03.286 "thread": "nvmf_tgt_poll_group_000", 00:18:03.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:03.286 "listen_address": { 00:18:03.286 "trtype": "TCP", 00:18:03.286 "adrfam": "IPv4", 00:18:03.286 "traddr": "10.0.0.2", 00:18:03.286 "trsvcid": "4420" 00:18:03.286 }, 00:18:03.286 "peer_address": { 00:18:03.286 "trtype": "TCP", 00:18:03.286 "adrfam": "IPv4", 00:18:03.286 "traddr": "10.0.0.1", 00:18:03.286 "trsvcid": "36096" 00:18:03.286 }, 00:18:03.286 "auth": { 00:18:03.286 "state": "completed", 00:18:03.286 "digest": "sha512", 00:18:03.286 "dhgroup": "ffdhe8192" 00:18:03.286 } 00:18:03.286 } 00:18:03.286 ]' 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.286 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.547 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:18:03.547 16:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.489 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.061 00:18:05.061 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.061 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.061 16:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.322 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.322 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.322 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.322 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.322 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.322 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.322 { 00:18:05.322 "cntlid": 145, 00:18:05.322 "qid": 0, 00:18:05.322 "state": "enabled", 00:18:05.322 "thread": "nvmf_tgt_poll_group_000", 00:18:05.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:05.322 "listen_address": { 
00:18:05.322 "trtype": "TCP", 00:18:05.322 "adrfam": "IPv4", 00:18:05.323 "traddr": "10.0.0.2", 00:18:05.323 "trsvcid": "4420" 00:18:05.323 }, 00:18:05.323 "peer_address": { 00:18:05.323 "trtype": "TCP", 00:18:05.323 "adrfam": "IPv4", 00:18:05.323 "traddr": "10.0.0.1", 00:18:05.323 "trsvcid": "36120" 00:18:05.323 }, 00:18:05.323 "auth": { 00:18:05.323 "state": "completed", 00:18:05.323 "digest": "sha512", 00:18:05.323 "dhgroup": "ffdhe8192" 00:18:05.323 } 00:18:05.323 } 00:18:05.323 ]' 00:18:05.323 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.323 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.323 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.323 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.323 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.323 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.323 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.323 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.584 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:18:05.584 16:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:00:OTE1YmJjYzE3YjFjOTc5MTUwNjhjMmUzMWVjNzdiM2JjZjlkMTJkMmZjNTk2MzQ40V3t1w==: --dhchap-ctrl-secret DHHC-1:03:NmJiYmRkODI0NmE5ZmZhNzNiNjg3OGI0OWE0YzhlZWI5YWY3NWJiOTU5OWY4MmU0NTZlM2NiMWQ4ZDViN2YyZfnUQlU=: 00:18:06.155 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.155 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:06.155 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.155 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.155 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.155 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:06.155 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.155 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:06.415 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:06.676 request: 00:18:06.676 { 00:18:06.676 "name": "nvme0", 00:18:06.676 "trtype": "tcp", 00:18:06.676 "traddr": "10.0.0.2", 00:18:06.676 "adrfam": "ipv4", 00:18:06.676 "trsvcid": "4420", 00:18:06.676 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:06.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:06.676 "prchk_reftag": false, 00:18:06.676 "prchk_guard": false, 00:18:06.676 "hdgst": false, 00:18:06.676 "ddgst": 
false, 00:18:06.676 "dhchap_key": "key2", 00:18:06.676 "allow_unrecognized_csi": false, 00:18:06.676 "method": "bdev_nvme_attach_controller", 00:18:06.676 "req_id": 1 00:18:06.676 } 00:18:06.676 Got JSON-RPC error response 00:18:06.676 response: 00:18:06.676 { 00:18:06.676 "code": -5, 00:18:06.676 "message": "Input/output error" 00:18:06.676 } 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
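The `NOT bdev_connect ... es=1 ... (( !es == 0 ))` sequence traced above is the autotest negative-test idiom: run a command that is expected to fail, capture its exit status, and pass only when the status is nonzero. A minimal standalone sketch of that idiom (the function below is an illustrative reimplementation, not the real `autotest_common.sh` helper):

```shell
#!/usr/bin/env bash
# NOT: succeed only when the wrapped command fails. Illustrative sketch of
# the idiom traced above; the real helper has extra bookkeeping.
NOT() {
    local es=0
    "$@" || es=$?
    # the traced helper special-cases statuses > 128 (signal deaths);
    # this sketch simply propagates them instead of inverting
    if ((es > 128)); then
        return "$es"
    fi
    # invert: a nonzero status from the command is a pass for a negative test
    ((es != 0))
}

# stand-in for an attach attempt the target rejects with -EIO
failing_attach() {
    echo '{"code": -5, "message": "Input/output error"}' >&2
    return 1
}

NOT failing_attach && echo negative-test-passed
NOT true || echo positive-command-detected
```

Running the sketch prints `negative-test-passed` (the expected failure counts as a pass) followed by `positive-command-detected` (a command that unexpectedly succeeds is flagged).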
00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.676 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:06.937 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.937 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:06.937 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.937 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.937 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.937 16:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:07.197 request: 00:18:07.197 { 00:18:07.197 "name": "nvme0", 00:18:07.197 "trtype": "tcp", 00:18:07.197 "traddr": "10.0.0.2", 
00:18:07.197 "adrfam": "ipv4", 00:18:07.197 "trsvcid": "4420", 00:18:07.197 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:07.197 "prchk_reftag": false, 00:18:07.197 "prchk_guard": false, 00:18:07.197 "hdgst": false, 00:18:07.197 "ddgst": false, 00:18:07.197 "dhchap_key": "key1", 00:18:07.197 "dhchap_ctrlr_key": "ckey2", 00:18:07.197 "allow_unrecognized_csi": false, 00:18:07.197 "method": "bdev_nvme_attach_controller", 00:18:07.197 "req_id": 1 00:18:07.197 } 00:18:07.197 Got JSON-RPC error response 00:18:07.197 response: 00:18:07.197 { 00:18:07.197 "code": -5, 00:18:07.197 "message": "Input/output error" 00:18:07.197 } 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 
00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.197 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.457 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.719 request: 00:18:07.719 { 00:18:07.719 "name": "nvme0", 00:18:07.719 "trtype": "tcp", 00:18:07.719 "traddr": "10.0.0.2", 00:18:07.719 "adrfam": "ipv4", 00:18:07.719 "trsvcid": "4420", 00:18:07.719 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:07.719 "prchk_reftag": false, 00:18:07.719 "prchk_guard": false, 00:18:07.719 "hdgst": false, 00:18:07.719 "ddgst": false, 00:18:07.719 "dhchap_key": "key1", 00:18:07.719 "dhchap_ctrlr_key": "ckey1", 00:18:07.719 "allow_unrecognized_csi": false, 00:18:07.719 "method": "bdev_nvme_attach_controller", 00:18:07.719 "req_id": 1 00:18:07.719 } 00:18:07.719 Got JSON-RPC error response 00:18:07.719 response: 00:18:07.719 { 00:18:07.719 "code": -5, 00:18:07.719 "message": "Input/output error" 00:18:07.719 } 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.719 
16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2175906 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2175906 ']' 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2175906 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.719 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2175906 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2175906' 00:18:07.980 killing process with pid 2175906 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2175906 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2175906 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2202801 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2202801 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2202801 ']' 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.980 16:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2202801 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2202801 ']' 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
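`waitforlisten` above (`max_retries=100`, "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") blocks until the freshly started `nvmf_tgt` answers on its RPC socket. The general shape is a bounded retry loop; a generic, hypothetical version of that loop (the real helper polls `rpc.py` against the app's UNIX socket rather than an arbitrary command):

```shell
#!/usr/bin/env bash
# wait_until: poll a command until it succeeds or the retry budget runs out.
# Generic sketch of the waitforlisten idea, not the SPDK helper itself.
wait_until() {
    local max_retries=$1; shift
    local i
    for ((i = 0; i < max_retries; i++)); do
        if "$@"; then
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for: $*" >&2
    return 1
}

wait_until 5 true && echo came-up
wait_until 2 test -S /var/tmp/no-such.sock 2>/dev/null || echo still-down
```

The first call returns immediately; the second exhausts its two retries on a socket that never appears and falls through to the timeout path.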
00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.921 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.182 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:09.182 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:09.182 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 null0 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lAH 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Vy2 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vy2 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RBx 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.knV ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.knV 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZhF 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Bf1 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Bf1 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2L0 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.182 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.124 nvme0n1 00:18:10.124 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.124 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.124 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.385 { 00:18:10.385 "cntlid": 1, 00:18:10.385 "qid": 0, 00:18:10.385 "state": "enabled", 00:18:10.385 "thread": "nvmf_tgt_poll_group_000", 00:18:10.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:10.385 "listen_address": { 00:18:10.385 "trtype": "TCP", 00:18:10.385 "adrfam": "IPv4", 00:18:10.385 "traddr": "10.0.0.2", 00:18:10.385 "trsvcid": "4420" 00:18:10.385 }, 00:18:10.385 "peer_address": { 00:18:10.385 "trtype": "TCP", 00:18:10.385 "adrfam": "IPv4", 00:18:10.385 "traddr": "10.0.0.1", 00:18:10.385 "trsvcid": "36168" 00:18:10.385 }, 00:18:10.385 "auth": { 00:18:10.385 "state": "completed", 00:18:10.385 "digest": "sha512", 00:18:10.385 "dhgroup": "ffdhe8192" 00:18:10.385 } 00:18:10.385 } 00:18:10.385 ]' 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
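A side note on readability: comparisons in this log appear as `[[ sha512 == \s\h\a\5\1\2 ]]` because under `set -x` bash backslash-escapes every character of the right-hand pattern of `[[ == ]]` when printing the trace; the script source just compares against a plain string. This is reproducible directly:

```shell
#!/usr/bin/env bash
# reproduce the escaped pattern seen in the trace: the \s\h\a\5\1\2 form
# is xtrace rendering, not what the script source contains
set -x
digest=sha512
[[ $digest == sha512 ]] && echo digest-ok
set +x
```

On stderr the trace shows the escaped form; on stdout the script prints `digest-ok` as usual.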
00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.385 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.646 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:18:10.646 16:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.588 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.849 request: 00:18:11.849 { 00:18:11.849 "name": "nvme0", 00:18:11.849 "trtype": "tcp", 00:18:11.849 "traddr": "10.0.0.2", 00:18:11.849 "adrfam": "ipv4", 00:18:11.849 "trsvcid": "4420", 00:18:11.849 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:11.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:11.849 "prchk_reftag": false, 00:18:11.849 "prchk_guard": false, 00:18:11.849 "hdgst": false, 00:18:11.849 "ddgst": false, 00:18:11.849 "dhchap_key": "key3", 00:18:11.849 "allow_unrecognized_csi": false, 00:18:11.849 "method": "bdev_nvme_attach_controller", 00:18:11.849 "req_id": 1 00:18:11.849 } 00:18:11.849 Got JSON-RPC error response 00:18:11.849 response: 00:18:11.849 { 00:18:11.849 "code": -5, 00:18:11.849 "message": "Input/output error" 00:18:11.849 } 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.849 16:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:11.849 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.110 request: 00:18:12.110 { 00:18:12.110 "name": "nvme0", 00:18:12.110 "trtype": "tcp", 00:18:12.110 "traddr": "10.0.0.2", 00:18:12.110 "adrfam": "ipv4", 00:18:12.110 "trsvcid": "4420", 00:18:12.110 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:12.110 "prchk_reftag": false, 00:18:12.110 "prchk_guard": false, 00:18:12.110 "hdgst": false, 00:18:12.110 "ddgst": false, 00:18:12.110 "dhchap_key": "key3", 00:18:12.110 "allow_unrecognized_csi": false, 00:18:12.110 "method": "bdev_nvme_attach_controller", 00:18:12.110 "req_id": 1 00:18:12.110 } 00:18:12.110 Got JSON-RPC error response 00:18:12.110 response: 00:18:12.110 { 00:18:12.110 "code": -5, 00:18:12.110 "message": "Input/output error" 00:18:12.110 } 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.110 16:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.371 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.631 request: 00:18:12.631 { 00:18:12.631 "name": "nvme0", 00:18:12.631 "trtype": "tcp", 00:18:12.631 "traddr": "10.0.0.2", 00:18:12.631 "adrfam": "ipv4", 00:18:12.631 "trsvcid": "4420", 00:18:12.631 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.631 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:12.631 "prchk_reftag": false, 00:18:12.631 "prchk_guard": false, 00:18:12.631 "hdgst": false, 00:18:12.631 "ddgst": false, 00:18:12.631 "dhchap_key": "key0", 00:18:12.631 "dhchap_ctrlr_key": "key1", 00:18:12.631 "allow_unrecognized_csi": false, 00:18:12.631 "method": "bdev_nvme_attach_controller", 00:18:12.631 "req_id": 1 00:18:12.631 } 00:18:12.631 Got JSON-RPC error response 00:18:12.631 response: 00:18:12.631 { 00:18:12.631 "code": -5, 00:18:12.631 "message": "Input/output error" 00:18:12.631 } 00:18:12.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:12.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:12.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:12.631 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:12.892 nvme0n1 00:18:12.892 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:18:12.892 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:12.892 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.154 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.154 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.154 16:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.154 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:18:13.154 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.154 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.154 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.154 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:13.154 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:13.154 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:14.099 nvme0n1 00:18:14.099 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:14.099 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:14.099 16:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.099 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.099 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:14.099 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.099 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.099 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.099 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:14.099 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:14.099 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.360 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.360 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:18:14.360 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -l 0 --dhchap-secret DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: --dhchap-ctrl-secret DHHC-1:03:ZDI3ZjY2M2E0MTc4MDEyMTg4NGE0OGNhMjc3YjE0NThiZGU5NWU5MzIxNTUzZGUyMTJiYzEzYWI1ZDliNWMwYTvXris=: 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.301 16:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.301 16:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:15.301 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:15.301 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:15.301 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:15.301 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.301 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:15.302 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.302 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:15.302 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:15.302 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:15.874 request: 00:18:15.874 { 00:18:15.874 "name": "nvme0", 00:18:15.874 "trtype": "tcp", 00:18:15.874 "traddr": "10.0.0.2", 00:18:15.874 "adrfam": "ipv4", 00:18:15.874 "trsvcid": "4420", 00:18:15.874 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:15.874 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:18:15.874 "prchk_reftag": false, 00:18:15.874 "prchk_guard": false, 00:18:15.874 "hdgst": false, 00:18:15.874 "ddgst": false, 00:18:15.874 "dhchap_key": "key1", 00:18:15.874 "allow_unrecognized_csi": false, 00:18:15.874 "method": "bdev_nvme_attach_controller", 00:18:15.874 "req_id": 1 00:18:15.874 } 00:18:15.874 Got JSON-RPC error response 00:18:15.874 response: 00:18:15.874 { 00:18:15.874 "code": -5, 00:18:15.874 "message": "Input/output error" 00:18:15.874 } 00:18:15.874 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:15.874 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.874 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.874 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.874 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:15.874 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:15.875 16:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:16.855 nvme0n1 00:18:16.855 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:18:16.855 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:16.855 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.855 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.855 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.855 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.166 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:17.166 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.166 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.166 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.166 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:17.166 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:17.166 16:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:17.166 nvme0n1 00:18:17.166 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:17.166 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:17.166 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.451 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.451 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.451 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: '' 2s 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:17.712 16:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: ]] 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDljMDA5NTBlNjkwMTBkZWU5ODYwNjNhNTYxOGRiNmFZUFSi: 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:17.712 16:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1250 -- # return 0 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: 2s 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: ]] 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:YzJhNDBjN2VjY2Q2ZWE3MzQ3NDAyYzE3NTkxMDE1YjU1MjhiMDc1YjQ4YzIzNDgypMXG7w==: 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:19.657 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:21.569 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:21.569 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:21.569 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:21.569 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:21.569 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:21.569 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:21.569 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:21.569 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.830 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.830 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.830 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.830 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.830 16:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:21.830 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:21.830 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:22.771 nvme0n1 00:18:22.771 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.771 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.771 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.771 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.771 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.771 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:23.031 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:23.031 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:23.031 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.292 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.292 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:23.292 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.292 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.292 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.292 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:23.292 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:23.552 16:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:23.552 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:24.122 request: 00:18:24.122 { 00:18:24.122 "name": "nvme0", 00:18:24.122 "dhchap_key": "key1", 00:18:24.122 "dhchap_ctrlr_key": "key3", 00:18:24.122 "method": "bdev_nvme_set_keys", 00:18:24.122 "req_id": 1 00:18:24.122 } 00:18:24.122 Got JSON-RPC error response 00:18:24.122 response: 00:18:24.122 { 00:18:24.122 "code": -13, 00:18:24.122 "message": "Permission denied" 00:18:24.122 } 00:18:24.122 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:24.122 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.122 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.122 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.122 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:24.122 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:24.122 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.382 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:24.382 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:25.322 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:25.322 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:25.322 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.583 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:25.583 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:25.583 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.583 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.583 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.583 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:25.583 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:25.583 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:26.524 nvme0n1 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:26.524 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:26.802 request: 00:18:26.802 { 00:18:26.802 "name": "nvme0", 00:18:26.802 "dhchap_key": "key2", 
00:18:26.802 "dhchap_ctrlr_key": "key0", 00:18:26.802 "method": "bdev_nvme_set_keys", 00:18:26.802 "req_id": 1 00:18:26.802 } 00:18:26.802 Got JSON-RPC error response 00:18:26.802 response: 00:18:26.802 { 00:18:26.802 "code": -13, 00:18:26.802 "message": "Permission denied" 00:18:26.802 } 00:18:26.802 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:26.802 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.802 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.802 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.802 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:26.802 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:26.802 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.062 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:27.062 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:28.001 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:28.001 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:28.001 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:28.262 16:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2176044 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2176044 ']' 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2176044 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2176044 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2176044' 00:18:28.262 killing process with pid 2176044 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2176044 00:18:28.262 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2176044 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.543 rmmod nvme_tcp 00:18:28.543 rmmod nvme_fabrics 00:18:28.543 rmmod nvme_keyring 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2202801 ']' 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2202801 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2202801 ']' 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2202801 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2202801 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 2202801' 00:18:28.543 killing process with pid 2202801 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2202801 00:18:28.543 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2202801 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.804 16:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.719 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:30.719 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.lAH /tmp/spdk.key-sha256.RBx 
/tmp/spdk.key-sha384.ZhF /tmp/spdk.key-sha512.2L0 /tmp/spdk.key-sha512.Vy2 /tmp/spdk.key-sha384.knV /tmp/spdk.key-sha256.Bf1 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:30.719 00:18:30.719 real 2m44.121s 00:18:30.719 user 6m5.641s 00:18:30.719 sys 0m23.916s 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.979 ************************************ 00:18:30.979 END TEST nvmf_auth_target 00:18:30.979 ************************************ 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:30.979 ************************************ 00:18:30.979 START TEST nvmf_bdevio_no_huge 00:18:30.979 ************************************ 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:30.979 * Looking for test storage... 
00:18:30.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.979 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:31.241 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.241 16:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.241 --rc genhtml_branch_coverage=1 00:18:31.241 --rc genhtml_function_coverage=1 00:18:31.241 --rc genhtml_legend=1 00:18:31.241 --rc geninfo_all_blocks=1 00:18:31.241 --rc geninfo_unexecuted_blocks=1 00:18:31.241 00:18:31.241 ' 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.241 --rc genhtml_branch_coverage=1 00:18:31.241 --rc genhtml_function_coverage=1 00:18:31.241 --rc genhtml_legend=1 00:18:31.241 --rc geninfo_all_blocks=1 00:18:31.241 --rc geninfo_unexecuted_blocks=1 00:18:31.241 00:18:31.241 ' 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.241 --rc genhtml_branch_coverage=1 00:18:31.241 --rc genhtml_function_coverage=1 00:18:31.241 --rc genhtml_legend=1 00:18:31.241 --rc geninfo_all_blocks=1 00:18:31.241 --rc geninfo_unexecuted_blocks=1 00:18:31.241 00:18:31.241 ' 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.241 --rc genhtml_branch_coverage=1 00:18:31.241 --rc genhtml_function_coverage=1 00:18:31.241 --rc genhtml_legend=1 00:18:31.241 --rc geninfo_all_blocks=1 00:18:31.241 --rc geninfo_unexecuted_blocks=1 00:18:31.241 00:18:31.241 ' 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:31.241 
16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.241 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:31.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:31.242 16:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:18:39.383 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:39.383 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:39.383 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:39.384 Found net devices under 0000:31:00.0: cvl_0_0 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.384 
16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:39.384 Found net devices under 0000:31:00.1: cvl_0_1 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.384 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:39.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:18:39.384 00:18:39.384 --- 10.0.0.2 ping statistics --- 00:18:39.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.384 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:18:39.384 00:18:39.384 --- 10.0.0.1 ping statistics --- 00:18:39.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.384 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2211754 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2211754 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2211754 ']' 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.384 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.384 [2024-11-20 16:30:24.377781] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:18:39.384 [2024-11-20 16:30:24.377849] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:39.384 [2024-11-20 16:30:24.484606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.384 [2024-11-20 16:30:24.544678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.384 [2024-11-20 16:30:24.544721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.384 [2024-11-20 16:30:24.544729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.384 [2024-11-20 16:30:24.544737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.384 [2024-11-20 16:30:24.544743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.384 [2024-11-20 16:30:24.546285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:39.384 [2024-11-20 16:30:24.546448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:39.384 [2024-11-20 16:30:24.546607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.384 [2024-11-20 16:30:24.546607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.384 [2024-11-20 16:30:25.256320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.384 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.385 16:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.385 Malloc0 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.385 [2024-11-20 16:30:25.310144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.385 16:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:39.385 { 00:18:39.385 "params": { 00:18:39.385 "name": "Nvme$subsystem", 00:18:39.385 "trtype": "$TEST_TRANSPORT", 00:18:39.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.385 "adrfam": "ipv4", 00:18:39.385 "trsvcid": "$NVMF_PORT", 00:18:39.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.385 "hdgst": ${hdgst:-false}, 00:18:39.385 "ddgst": ${ddgst:-false} 00:18:39.385 }, 00:18:39.385 "method": "bdev_nvme_attach_controller" 00:18:39.385 } 00:18:39.385 EOF 00:18:39.385 )") 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:39.385 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:39.385 "params": { 00:18:39.385 "name": "Nvme1", 00:18:39.385 "trtype": "tcp", 00:18:39.385 "traddr": "10.0.0.2", 00:18:39.385 "adrfam": "ipv4", 00:18:39.385 "trsvcid": "4420", 00:18:39.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.385 "hdgst": false, 00:18:39.385 "ddgst": false 00:18:39.385 }, 00:18:39.385 "method": "bdev_nvme_attach_controller" 00:18:39.385 }' 00:18:39.646 [2024-11-20 16:30:25.369144] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:18:39.646 [2024-11-20 16:30:25.369218] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2211921 ] 00:18:39.646 [2024-11-20 16:30:25.452571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:39.646 [2024-11-20 16:30:25.508016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.646 [2024-11-20 16:30:25.508088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.646 [2024-11-20 16:30:25.508284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.907 I/O targets: 00:18:39.907 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:39.907 00:18:39.907 00:18:39.907 CUnit - A unit testing framework for C - Version 2.1-3 00:18:39.907 http://cunit.sourceforge.net/ 00:18:39.907 00:18:39.907 00:18:39.907 Suite: bdevio tests on: Nvme1n1 00:18:39.907 Test: blockdev write read block ...passed 00:18:40.168 Test: blockdev write zeroes read block ...passed 00:18:40.168 Test: blockdev write zeroes read no split ...passed 00:18:40.168 Test: blockdev write zeroes 
read split ...passed 00:18:40.168 Test: blockdev write zeroes read split partial ...passed 00:18:40.168 Test: blockdev reset ...[2024-11-20 16:30:25.929772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:40.168 [2024-11-20 16:30:25.929837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x767400 (9): Bad file descriptor 00:18:40.168 [2024-11-20 16:30:26.039253] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:40.168 passed 00:18:40.168 Test: blockdev write read 8 blocks ...passed 00:18:40.168 Test: blockdev write read size > 128k ...passed 00:18:40.168 Test: blockdev write read invalid size ...passed 00:18:40.429 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:40.429 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:40.429 Test: blockdev write read max offset ...passed 00:18:40.429 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:40.429 Test: blockdev writev readv 8 blocks ...passed 00:18:40.429 Test: blockdev writev readv 30 x 1block ...passed 00:18:40.429 Test: blockdev writev readv block ...passed 00:18:40.429 Test: blockdev writev readv size > 128k ...passed 00:18:40.429 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:40.429 Test: blockdev comparev and writev ...[2024-11-20 16:30:26.297994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.429 [2024-11-20 16:30:26.298019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.298030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.429 [2024-11-20 
16:30:26.298036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.298386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.429 [2024-11-20 16:30:26.298395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.298404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.429 [2024-11-20 16:30:26.298410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.298738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.429 [2024-11-20 16:30:26.298747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.298756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.429 [2024-11-20 16:30:26.298769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.299142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.429 [2024-11-20 16:30:26.299150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.299160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:40.429 [2024-11-20 16:30:26.299165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:40.429 passed 00:18:40.429 Test: blockdev nvme passthru rw ...passed 00:18:40.429 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:30:26.382376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:40.429 [2024-11-20 16:30:26.382386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.382596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:40.429 [2024-11-20 16:30:26.382603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.382843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:40.429 [2024-11-20 16:30:26.382849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:40.429 [2024-11-20 16:30:26.383070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:40.430 [2024-11-20 16:30:26.383077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:40.430 passed 00:18:40.690 Test: blockdev nvme admin passthru ...passed 00:18:40.690 Test: blockdev copy ...passed 00:18:40.690 00:18:40.690 Run Summary: Type Total Ran Passed Failed Inactive 00:18:40.690 suites 1 1 n/a 0 0 00:18:40.690 tests 23 23 23 0 0 00:18:40.690 asserts 152 152 152 0 n/a 00:18:40.690 00:18:40.690 Elapsed time = 1.298 seconds 
00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:40.950 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.951 rmmod nvme_tcp 00:18:40.951 rmmod nvme_fabrics 00:18:40.951 rmmod nvme_keyring 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2211754 ']' 00:18:40.951 16:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2211754 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2211754 ']' 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2211754 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2211754 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211754' 00:18:40.951 killing process with pid 2211754 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2211754 00:18:40.951 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2211754 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.211 16:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.211 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.758 00:18:43.758 real 0m12.463s 00:18:43.758 user 0m14.757s 00:18:43.758 sys 0m6.532s 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:43.758 ************************************ 00:18:43.758 END TEST nvmf_bdevio_no_huge 00:18:43.758 ************************************ 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.758 
************************************ 00:18:43.758 START TEST nvmf_tls 00:18:43.758 ************************************ 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:43.758 * Looking for test storage... 00:18:43.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.758 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.759 --rc genhtml_branch_coverage=1 00:18:43.759 --rc genhtml_function_coverage=1 00:18:43.759 --rc genhtml_legend=1 00:18:43.759 --rc geninfo_all_blocks=1 00:18:43.759 --rc geninfo_unexecuted_blocks=1 00:18:43.759 00:18:43.759 ' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.759 --rc genhtml_branch_coverage=1 00:18:43.759 --rc genhtml_function_coverage=1 00:18:43.759 --rc genhtml_legend=1 00:18:43.759 --rc geninfo_all_blocks=1 00:18:43.759 --rc geninfo_unexecuted_blocks=1 00:18:43.759 00:18:43.759 ' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.759 --rc genhtml_branch_coverage=1 00:18:43.759 --rc genhtml_function_coverage=1 00:18:43.759 --rc genhtml_legend=1 00:18:43.759 --rc geninfo_all_blocks=1 00:18:43.759 --rc geninfo_unexecuted_blocks=1 00:18:43.759 00:18:43.759 ' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.759 --rc genhtml_branch_coverage=1 00:18:43.759 --rc genhtml_function_coverage=1 00:18:43.759 --rc genhtml_legend=1 00:18:43.759 --rc geninfo_all_blocks=1 00:18:43.759 --rc geninfo_unexecuted_blocks=1 00:18:43.759 00:18:43.759 ' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.759 
16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:43.759 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:51.899 16:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:51.899 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:51.899 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:51.899 16:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:51.899 Found net devices under 0000:31:00.0: cvl_0_0 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:51.899 Found net devices under 0000:31:00.1: cvl_0_1 00:18:51.899 16:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:51.899 
16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:51.899 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:51.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:18:51.899 00:18:51.899 --- 10.0.0.2 ping statistics --- 00:18:51.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.900 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:18:51.900 00:18:51.900 --- 10.0.0.1 ping statistics --- 00:18:51.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.900 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2216477 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2216477 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # 
'[' -z 2216477 ']' 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.900 16:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:51.900 [2024-11-20 16:30:36.863175] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:18:51.900 [2024-11-20 16:30:36.863236] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.900 [2024-11-20 16:30:36.966221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.900 [2024-11-20 16:30:37.014538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.900 [2024-11-20 16:30:37.014581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:51.900 [2024-11-20 16:30:37.014590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.900 [2024-11-20 16:30:37.014596] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.900 [2024-11-20 16:30:37.014602] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.900 [2024-11-20 16:30:37.015321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.900 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.900 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.900 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.900 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.900 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.900 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.900 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:51.900 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:52.160 true 00:18:52.160 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:52.160 16:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:52.160 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:52.160 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:52.160 
16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:52.420 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:52.421 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:52.680 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:52.680 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:52.680 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:52.680 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:52.680 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:52.941 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:52.941 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:52.941 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:52.941 16:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:53.201 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:53.202 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:53.202 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:18:53.462 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.462 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:53.462 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:53.462 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:53.462 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:53.721 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.721 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:53.982 16:30:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.RlR1dpsSFh 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.dB1S6I3Hz4 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RlR1dpsSFh 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.dB1S6I3Hz4 00:18:53.982 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:54.242 16:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:54.502 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.RlR1dpsSFh 00:18:54.502 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.RlR1dpsSFh 00:18:54.502 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:54.502 [2024-11-20 16:30:40.374758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.502 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:54.762 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:54.762 [2024-11-20 16:30:40.711585] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.762 [2024-11-20 16:30:40.711787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.022 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:55.022 malloc0 00:18:55.022 16:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:55.283 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.RlR1dpsSFh 00:18:55.283 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:55.543 16:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RlR1dpsSFh 00:19:05.540 Initializing NVMe Controllers 00:19:05.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:05.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:05.540 Initialization complete. Launching workers. 
00:19:05.540 ======================================================== 00:19:05.540 Latency(us) 00:19:05.540 Device Information : IOPS MiB/s Average min max 00:19:05.540 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18641.31 72.82 3433.24 1110.32 4142.98 00:19:05.540 ======================================================== 00:19:05.540 Total : 18641.31 72.82 3433.24 1110.32 4142.98 00:19:05.540 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RlR1dpsSFh 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RlR1dpsSFh 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2219353 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2219353 /var/tmp/bdevperf.sock 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2219353 ']' 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.800 [2024-11-20 16:30:51.555275] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:05.800 [2024-11-20 16:30:51.555329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2219353 ] 00:19:05.800 [2024-11-20 16:30:51.612615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.800 [2024-11-20 16:30:51.641620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.800 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.801 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.801 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RlR1dpsSFh 00:19:06.062 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:06.321 [2024-11-20 16:30:52.035754] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.321 TLSTESTn1 00:19:06.321 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:06.321 Running I/O for 10 seconds... 00:19:08.276 5857.00 IOPS, 22.88 MiB/s [2024-11-20T15:30:55.618Z] 6119.50 IOPS, 23.90 MiB/s [2024-11-20T15:30:56.558Z] 6159.67 IOPS, 24.06 MiB/s [2024-11-20T15:30:57.499Z] 6071.00 IOPS, 23.71 MiB/s [2024-11-20T15:30:58.440Z] 6135.40 IOPS, 23.97 MiB/s [2024-11-20T15:30:59.384Z] 6148.33 IOPS, 24.02 MiB/s [2024-11-20T15:31:00.324Z] 6160.29 IOPS, 24.06 MiB/s [2024-11-20T15:31:01.264Z] 6157.00 IOPS, 24.05 MiB/s [2024-11-20T15:31:02.681Z] 6201.78 IOPS, 24.23 MiB/s [2024-11-20T15:31:02.681Z] 6214.50 IOPS, 24.28 MiB/s 00:19:16.722 Latency(us) 00:19:16.722 [2024-11-20T15:31:02.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.722 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:16.722 Verification LBA range: start 0x0 length 0x2000 00:19:16.722 TLSTESTn1 : 10.01 6219.08 24.29 0.00 0.00 20553.33 4751.36 41724.59 00:19:16.722 [2024-11-20T15:31:02.681Z] =================================================================================================================== 00:19:16.722 [2024-11-20T15:31:02.681Z] Total : 6219.08 24.29 0.00 0.00 20553.33 4751.36 41724.59 00:19:16.722 { 00:19:16.722 "results": [ 00:19:16.722 { 00:19:16.722 "job": "TLSTESTn1", 00:19:16.722 "core_mask": "0x4", 00:19:16.722 "workload": "verify", 00:19:16.722 "status": "finished", 00:19:16.722 "verify_range": { 00:19:16.722 "start": 0, 00:19:16.722 "length": 8192 00:19:16.722 }, 00:19:16.722 "queue_depth": 128, 00:19:16.722 "io_size": 4096, 00:19:16.722 "runtime": 10.013057, 00:19:16.722 "iops": 
6219.0797475735935, 00:19:16.722 "mibps": 24.29328026395935, 00:19:16.722 "io_failed": 0, 00:19:16.722 "io_timeout": 0, 00:19:16.722 "avg_latency_us": 20553.325495032543, 00:19:16.722 "min_latency_us": 4751.36, 00:19:16.722 "max_latency_us": 41724.58666666667 00:19:16.722 } 00:19:16.722 ], 00:19:16.722 "core_count": 1 00:19:16.722 } 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2219353 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2219353 ']' 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2219353 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2219353 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2219353' 00:19:16.722 killing process with pid 2219353 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2219353 00:19:16.722 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.722 00:19:16.722 Latency(us) 00:19:16.722 [2024-11-20T15:31:02.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.722 [2024-11-20T15:31:02.681Z] 
=================================================================================================================== 00:19:16.722 [2024-11-20T15:31:02.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2219353 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dB1S6I3Hz4 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dB1S6I3Hz4 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dB1S6I3Hz4 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dB1S6I3Hz4 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2221371 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2221371 /var/tmp/bdevperf.sock 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2221371 ']' 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.722 [2024-11-20 16:31:02.501794] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:19:16.722 [2024-11-20 16:31:02.501850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221371 ] 00:19:16.722 [2024-11-20 16:31:02.560556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.722 [2024-11-20 16:31:02.588657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.722 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.723 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:16.723 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dB1S6I3Hz4 00:19:17.061 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:17.061 [2024-11-20 16:31:02.974841] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.061 [2024-11-20 16:31:02.980301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:17.061 [2024-11-20 16:31:02.980929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adcbc0 (107): Transport endpoint is not connected 00:19:17.061 [2024-11-20 16:31:02.981926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adcbc0 (9): Bad file descriptor 00:19:17.061 
[2024-11-20 16:31:02.982927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:17.061 [2024-11-20 16:31:02.982935] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:17.061 [2024-11-20 16:31:02.982941] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:17.061 [2024-11-20 16:31:02.982948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:17.061 request: 00:19:17.061 { 00:19:17.061 "name": "TLSTEST", 00:19:17.061 "trtype": "tcp", 00:19:17.061 "traddr": "10.0.0.2", 00:19:17.061 "adrfam": "ipv4", 00:19:17.061 "trsvcid": "4420", 00:19:17.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.061 "prchk_reftag": false, 00:19:17.061 "prchk_guard": false, 00:19:17.061 "hdgst": false, 00:19:17.061 "ddgst": false, 00:19:17.061 "psk": "key0", 00:19:17.061 "allow_unrecognized_csi": false, 00:19:17.061 "method": "bdev_nvme_attach_controller", 00:19:17.061 "req_id": 1 00:19:17.061 } 00:19:17.061 Got JSON-RPC error response 00:19:17.061 response: 00:19:17.061 { 00:19:17.061 "code": -5, 00:19:17.061 "message": "Input/output error" 00:19:17.061 } 00:19:17.061 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2221371 00:19:17.061 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2221371 ']' 00:19:17.061 16:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2221371 00:19:17.061 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221371 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221371' 00:19:17.339 killing process with pid 2221371 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2221371 00:19:17.339 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.339 00:19:17.339 Latency(us) 00:19:17.339 [2024-11-20T15:31:03.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.339 [2024-11-20T15:31:03.298Z] =================================================================================================================== 00:19:17.339 [2024-11-20T15:31:03.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2221371 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RlR1dpsSFh 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RlR1dpsSFh 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RlR1dpsSFh 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RlR1dpsSFh 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2221657 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2221657 /var/tmp/bdevperf.sock 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2221657 ']' 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.339 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.339 [2024-11-20 16:31:03.213744] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:19:17.339 [2024-11-20 16:31:03.213802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221657 ] 00:19:17.339 [2024-11-20 16:31:03.271383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.599 [2024-11-20 16:31:03.300104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.599 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.599 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:17.599 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RlR1dpsSFh 00:19:17.599 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:17.860 [2024-11-20 16:31:03.694252] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.860 [2024-11-20 16:31:03.701946] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:17.860 [2024-11-20 16:31:03.701965] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:17.860 [2024-11-20 16:31:03.701991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:17.860 [2024-11-20 16:31:03.702440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d6bc0 (107): Transport endpoint is not connected 00:19:17.860 [2024-11-20 16:31:03.703436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d6bc0 (9): Bad file descriptor 00:19:17.860 [2024-11-20 16:31:03.704438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:17.860 [2024-11-20 16:31:03.704445] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:17.860 [2024-11-20 16:31:03.704450] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:17.860 [2024-11-20 16:31:03.704461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:17.860 request: 00:19:17.860 { 00:19:17.860 "name": "TLSTEST", 00:19:17.860 "trtype": "tcp", 00:19:17.860 "traddr": "10.0.0.2", 00:19:17.860 "adrfam": "ipv4", 00:19:17.860 "trsvcid": "4420", 00:19:17.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.860 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:17.860 "prchk_reftag": false, 00:19:17.860 "prchk_guard": false, 00:19:17.860 "hdgst": false, 00:19:17.860 "ddgst": false, 00:19:17.860 "psk": "key0", 00:19:17.860 "allow_unrecognized_csi": false, 00:19:17.860 "method": "bdev_nvme_attach_controller", 00:19:17.860 "req_id": 1 00:19:17.860 } 00:19:17.860 Got JSON-RPC error response 00:19:17.860 response: 00:19:17.860 { 00:19:17.860 "code": -5, 00:19:17.860 "message": "Input/output error" 00:19:17.860 } 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2221657 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2221657 ']' 00:19:17.860 16:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2221657 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221657 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221657' 00:19:17.860 killing process with pid 2221657 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2221657 00:19:17.860 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.860 00:19:17.860 Latency(us) 00:19:17.860 [2024-11-20T15:31:03.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.860 [2024-11-20T15:31:03.819Z] =================================================================================================================== 00:19:17.860 [2024-11-20T15:31:03.819Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.860 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2221657 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.121 16:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RlR1dpsSFh 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RlR1dpsSFh 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RlR1dpsSFh 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RlR1dpsSFh 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2221726 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2221726 /var/tmp/bdevperf.sock 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2221726 ']' 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.121 16:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.121 [2024-11-20 16:31:03.933956] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:19:18.121 [2024-11-20 16:31:03.934016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221726 ] 00:19:18.121 [2024-11-20 16:31:03.991591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.121 [2024-11-20 16:31:04.020887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.382 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.382 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.382 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RlR1dpsSFh 00:19:18.382 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.643 [2024-11-20 16:31:04.398991] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.643 [2024-11-20 16:31:04.403661] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:18.643 [2024-11-20 16:31:04.403678] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:18.644 [2024-11-20 16:31:04.403696] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:18.644 [2024-11-20 16:31:04.404117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecdbc0 (107): Transport endpoint is not connected 00:19:18.644 [2024-11-20 16:31:04.405112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecdbc0 (9): Bad file descriptor 00:19:18.644 [2024-11-20 16:31:04.406114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:18.644 [2024-11-20 16:31:04.406123] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:18.644 [2024-11-20 16:31:04.406129] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:18.644 [2024-11-20 16:31:04.406137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:18.644 request: 00:19:18.644 { 00:19:18.644 "name": "TLSTEST", 00:19:18.644 "trtype": "tcp", 00:19:18.644 "traddr": "10.0.0.2", 00:19:18.644 "adrfam": "ipv4", 00:19:18.644 "trsvcid": "4420", 00:19:18.644 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:18.644 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.644 "prchk_reftag": false, 00:19:18.644 "prchk_guard": false, 00:19:18.644 "hdgst": false, 00:19:18.644 "ddgst": false, 00:19:18.644 "psk": "key0", 00:19:18.644 "allow_unrecognized_csi": false, 00:19:18.644 "method": "bdev_nvme_attach_controller", 00:19:18.644 "req_id": 1 00:19:18.644 } 00:19:18.644 Got JSON-RPC error response 00:19:18.644 response: 00:19:18.644 { 00:19:18.644 "code": -5, 00:19:18.644 "message": "Input/output error" 00:19:18.644 } 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2221726 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2221726 ']' 00:19:18.644 16:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2221726 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221726 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221726' 00:19:18.644 killing process with pid 2221726 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2221726 00:19:18.644 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.644 00:19:18.644 Latency(us) 00:19:18.644 [2024-11-20T15:31:04.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.644 [2024-11-20T15:31:04.603Z] =================================================================================================================== 00:19:18.644 [2024-11-20T15:31:04.603Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2221726 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.644 16:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2221869 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.644 16:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2221869 /var/tmp/bdevperf.sock 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2221869 ']' 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.644 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.905 [2024-11-20 16:31:04.630834] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:19:18.905 [2024-11-20 16:31:04.630890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221869 ] 00:19:18.905 [2024-11-20 16:31:04.688312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.905 [2024-11-20 16:31:04.717233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.905 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.905 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.905 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:19.165 [2024-11-20 16:31:04.942985] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:19.165 [2024-11-20 16:31:04.943005] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:19.165 request: 00:19:19.165 { 00:19:19.165 "name": "key0", 00:19:19.165 "path": "", 00:19:19.165 "method": "keyring_file_add_key", 00:19:19.165 "req_id": 1 00:19:19.165 } 00:19:19.165 Got JSON-RPC error response 00:19:19.165 response: 00:19:19.165 { 00:19:19.165 "code": -1, 00:19:19.165 "message": "Operation not permitted" 00:19:19.165 } 00:19:19.165 16:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:19.165 [2024-11-20 16:31:05.095443] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:19.165 [2024-11-20 16:31:05.095467] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:19.165 request: 00:19:19.165 { 00:19:19.165 "name": "TLSTEST", 00:19:19.165 "trtype": "tcp", 00:19:19.165 "traddr": "10.0.0.2", 00:19:19.165 "adrfam": "ipv4", 00:19:19.165 "trsvcid": "4420", 00:19:19.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.165 "prchk_reftag": false, 00:19:19.165 "prchk_guard": false, 00:19:19.165 "hdgst": false, 00:19:19.165 "ddgst": false, 00:19:19.165 "psk": "key0", 00:19:19.165 "allow_unrecognized_csi": false, 00:19:19.165 "method": "bdev_nvme_attach_controller", 00:19:19.165 "req_id": 1 00:19:19.165 } 00:19:19.165 Got JSON-RPC error response 00:19:19.165 response: 00:19:19.165 { 00:19:19.165 "code": -126, 00:19:19.165 "message": "Required key not available" 00:19:19.165 } 00:19:19.165 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2221869 00:19:19.165 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2221869 ']' 00:19:19.165 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2221869 00:19:19.165 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:19.165 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.165 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221869 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221869' 00:19:19.426 killing process with pid 2221869 
00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2221869 00:19:19.426 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.426 00:19:19.426 Latency(us) 00:19:19.426 [2024-11-20T15:31:05.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.426 [2024-11-20T15:31:05.385Z] =================================================================================================================== 00:19:19.426 [2024-11-20T15:31:05.385Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2221869 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2216477 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2216477 ']' 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2216477 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2216477 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2216477' 00:19:19.426 killing process with pid 2216477 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2216477 00:19:19.426 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2216477 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.egsM6D515H 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:19.687 16:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.egsM6D515H 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2222092 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2222092 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2222092 ']' 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.687 16:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.687 [2024-11-20 16:31:05.559768] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:19:19.687 [2024-11-20 16:31:05.559830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.947 [2024-11-20 16:31:05.652915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.947 [2024-11-20 16:31:05.685188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.947 [2024-11-20 16:31:05.685218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.947 [2024-11-20 16:31:05.685224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.947 [2024-11-20 16:31:05.685229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.947 [2024-11-20 16:31:05.685233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:19.947 [2024-11-20 16:31:05.685720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.egsM6D515H 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.egsM6D515H 00:19:20.519 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.780 [2024-11-20 16:31:06.540469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.780 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.780 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:21.042 [2024-11-20 16:31:06.853228] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.042 [2024-11-20 16:31:06.853414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:21.042 16:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.303 malloc0 00:19:21.303 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.303 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:21.563 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:21.563 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.egsM6D515H 00:19:21.563 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:21.563 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:21.563 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:21.563 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.egsM6D515H 00:19:21.563 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.563 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.564 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2222454 00:19:21.564 16:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.564 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2222454 /var/tmp/bdevperf.sock 00:19:21.564 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2222454 ']' 00:19:21.564 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.564 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.564 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.564 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.564 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.564 [2024-11-20 16:31:07.518642] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:19:21.564 [2024-11-20 16:31:07.518698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222454 ] 00:19:21.823 [2024-11-20 16:31:07.577423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.823 [2024-11-20 16:31:07.606292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.823 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.823 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:21.823 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:22.083 16:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:22.083 [2024-11-20 16:31:08.024641] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.343 TLSTESTn1 00:19:22.343 16:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:22.343 Running I/O for 10 seconds... 
00:19:24.668 6058.00 IOPS, 23.66 MiB/s [2024-11-20T15:31:11.578Z] 5230.00 IOPS, 20.43 MiB/s [2024-11-20T15:31:12.518Z] 5241.67 IOPS, 20.48 MiB/s [2024-11-20T15:31:13.459Z] 5278.00 IOPS, 20.62 MiB/s [2024-11-20T15:31:14.399Z] 5521.40 IOPS, 21.57 MiB/s [2024-11-20T15:31:15.339Z] 5601.00 IOPS, 21.88 MiB/s [2024-11-20T15:31:16.278Z] 5649.00 IOPS, 22.07 MiB/s [2024-11-20T15:31:17.658Z] 5649.88 IOPS, 22.07 MiB/s [2024-11-20T15:31:18.599Z] 5660.11 IOPS, 22.11 MiB/s [2024-11-20T15:31:18.599Z] 5676.40 IOPS, 22.17 MiB/s 00:19:32.640 Latency(us) 00:19:32.640 [2024-11-20T15:31:18.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.640 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:32.640 Verification LBA range: start 0x0 length 0x2000 00:19:32.640 TLSTESTn1 : 10.01 5680.68 22.19 0.00 0.00 22501.63 4642.13 31675.73 00:19:32.640 [2024-11-20T15:31:18.599Z] =================================================================================================================== 00:19:32.640 [2024-11-20T15:31:18.599Z] Total : 5680.68 22.19 0.00 0.00 22501.63 4642.13 31675.73 00:19:32.640 { 00:19:32.640 "results": [ 00:19:32.640 { 00:19:32.640 "job": "TLSTESTn1", 00:19:32.640 "core_mask": "0x4", 00:19:32.640 "workload": "verify", 00:19:32.640 "status": "finished", 00:19:32.640 "verify_range": { 00:19:32.640 "start": 0, 00:19:32.640 "length": 8192 00:19:32.640 }, 00:19:32.640 "queue_depth": 128, 00:19:32.640 "io_size": 4096, 00:19:32.640 "runtime": 10.014999, 00:19:32.640 "iops": 5680.679548744838, 00:19:32.640 "mibps": 22.190154487284524, 00:19:32.640 "io_failed": 0, 00:19:32.640 "io_timeout": 0, 00:19:32.640 "avg_latency_us": 22501.62784222738, 00:19:32.640 "min_latency_us": 4642.133333333333, 00:19:32.640 "max_latency_us": 31675.733333333334 00:19:32.640 } 00:19:32.640 ], 00:19:32.640 "core_count": 1 00:19:32.640 } 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2222454 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2222454 ']' 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2222454 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2222454 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2222454' 00:19:32.640 killing process with pid 2222454 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2222454 00:19:32.640 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.640 00:19:32.640 Latency(us) 00:19:32.640 [2024-11-20T15:31:18.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.640 [2024-11-20T15:31:18.599Z] =================================================================================================================== 00:19:32.640 [2024-11-20T15:31:18.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2222454 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.egsM6D515H 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.egsM6D515H 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.egsM6D515H 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.egsM6D515H 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.egsM6D515H 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2224599 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2224599 /var/tmp/bdevperf.sock 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2224599 ']' 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.640 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.641 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.641 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.641 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.641 [2024-11-20 16:31:18.505709] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:19:32.641 [2024-11-20 16:31:18.505768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224599 ] 00:19:32.641 [2024-11-20 16:31:18.565464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.641 [2024-11-20 16:31:18.593687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.900 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.900 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.900 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:32.900 [2024-11-20 16:31:18.831549] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.egsM6D515H': 0100666 00:19:32.900 [2024-11-20 16:31:18.831573] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:32.900 request: 00:19:32.900 { 00:19:32.900 "name": "key0", 00:19:32.900 "path": "/tmp/tmp.egsM6D515H", 00:19:32.900 "method": "keyring_file_add_key", 00:19:32.900 "req_id": 1 00:19:32.900 } 00:19:32.900 Got JSON-RPC error response 00:19:32.900 response: 00:19:32.900 { 00:19:32.900 "code": -1, 00:19:32.900 "message": "Operation not permitted" 00:19:32.900 } 00:19:33.160 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.160 [2024-11-20 16:31:19.012079] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.160 [2024-11-20 16:31:19.012100] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:33.160 request: 00:19:33.160 { 00:19:33.160 "name": "TLSTEST", 00:19:33.160 "trtype": "tcp", 00:19:33.160 "traddr": "10.0.0.2", 00:19:33.160 "adrfam": "ipv4", 00:19:33.161 "trsvcid": "4420", 00:19:33.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.161 "prchk_reftag": false, 00:19:33.161 "prchk_guard": false, 00:19:33.161 "hdgst": false, 00:19:33.161 "ddgst": false, 00:19:33.161 "psk": "key0", 00:19:33.161 "allow_unrecognized_csi": false, 00:19:33.161 "method": "bdev_nvme_attach_controller", 00:19:33.161 "req_id": 1 00:19:33.161 } 00:19:33.161 Got JSON-RPC error response 00:19:33.161 response: 00:19:33.161 { 00:19:33.161 "code": -126, 00:19:33.161 "message": "Required key not available" 00:19:33.161 } 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2224599 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2224599 ']' 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2224599 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2224599 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2224599' 00:19:33.161 killing process with pid 2224599 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2224599 00:19:33.161 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.161 00:19:33.161 Latency(us) 00:19:33.161 [2024-11-20T15:31:19.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.161 [2024-11-20T15:31:19.120Z] =================================================================================================================== 00:19:33.161 [2024-11-20T15:31:19.120Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.161 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2224599 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2222092 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2222092 ']' 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2222092 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2222092 00:19:33.422 
16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2222092' 00:19:33.422 killing process with pid 2222092 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2222092 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2222092 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.422 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2224809 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2224809 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2224809 ']' 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:33.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.683 [2024-11-20 16:31:19.444069] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:33.683 [2024-11-20 16:31:19.444133] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.683 [2024-11-20 16:31:19.535230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.683 [2024-11-20 16:31:19.562779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.683 [2024-11-20 16:31:19.562810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.683 [2024-11-20 16:31:19.562816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.683 [2024-11-20 16:31:19.562821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.683 [2024-11-20 16:31:19.562825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:33.683 [2024-11-20 16:31:19.563308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.683 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.egsM6D515H 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.egsM6D515H 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.egsM6D515H 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.egsM6D515H 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.944 [2024-11-20 16:31:19.834864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.944 16:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:34.204 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:34.204 [2024-11-20 16:31:20.159679] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.204 [2024-11-20 16:31:20.159888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.465 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:34.465 malloc0 00:19:34.465 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:34.725 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:34.725 [2024-11-20 16:31:20.646544] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.egsM6D515H': 0100666 00:19:34.725 [2024-11-20 16:31:20.646566] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:34.725 request: 00:19:34.725 { 00:19:34.725 "name": "key0", 00:19:34.725 "path": "/tmp/tmp.egsM6D515H", 00:19:34.725 "method": "keyring_file_add_key", 00:19:34.725 "req_id": 1 
00:19:34.725 } 00:19:34.725 Got JSON-RPC error response 00:19:34.725 response: 00:19:34.725 { 00:19:34.725 "code": -1, 00:19:34.726 "message": "Operation not permitted" 00:19:34.726 } 00:19:34.726 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.986 [2024-11-20 16:31:20.827013] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:34.986 [2024-11-20 16:31:20.827040] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:34.986 request: 00:19:34.986 { 00:19:34.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.986 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.986 "psk": "key0", 00:19:34.986 "method": "nvmf_subsystem_add_host", 00:19:34.986 "req_id": 1 00:19:34.986 } 00:19:34.986 Got JSON-RPC error response 00:19:34.986 response: 00:19:34.986 { 00:19:34.986 "code": -32603, 00:19:34.986 "message": "Internal error" 00:19:34.986 } 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2224809 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2224809 ']' 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2224809 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.986 16:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2224809 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2224809' 00:19:34.986 killing process with pid 2224809 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2224809 00:19:34.986 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2224809 00:19:35.246 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.egsM6D515H 00:19:35.246 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:35.246 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:35.246 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2225177 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2225177 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2225177 ']' 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.247 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.247 [2024-11-20 16:31:21.100721] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:35.247 [2024-11-20 16:31:21.100771] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.247 [2024-11-20 16:31:21.192560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.507 [2024-11-20 16:31:21.220068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.507 [2024-11-20 16:31:21.220105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.507 [2024-11-20 16:31:21.220111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.507 [2024-11-20 16:31:21.220115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.507 [2024-11-20 16:31:21.220119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:35.507 [2024-11-20 16:31:21.220589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.egsM6D515H 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.egsM6D515H 00:19:36.077 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.338 [2024-11-20 16:31:22.065534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.338 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:36.338 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:36.599 [2024-11-20 16:31:22.386318] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.599 [2024-11-20 16:31:22.386513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:36.599 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:36.860 malloc0 00:19:36.860 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:36.860 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:37.121 16:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2225543 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2225543 /var/tmp/bdevperf.sock 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2225543 ']' 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:37.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.382 [2024-11-20 16:31:23.164815] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:37.382 [2024-11-20 16:31:23.164865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225543 ] 00:19:37.382 [2024-11-20 16:31:23.223688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.382 [2024-11-20 16:31:23.252718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:37.382 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:37.644 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.905 [2024-11-20 16:31:23.643076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.905 TLSTESTn1 00:19:37.905 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:38.166 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:38.166 "subsystems": [ 00:19:38.166 { 00:19:38.166 "subsystem": "keyring", 00:19:38.166 "config": [ 00:19:38.166 { 00:19:38.166 "method": "keyring_file_add_key", 00:19:38.166 "params": { 00:19:38.166 "name": "key0", 00:19:38.166 "path": "/tmp/tmp.egsM6D515H" 00:19:38.166 } 00:19:38.166 } 00:19:38.166 ] 00:19:38.166 }, 00:19:38.166 { 00:19:38.166 "subsystem": "iobuf", 00:19:38.166 "config": [ 00:19:38.166 { 00:19:38.166 "method": "iobuf_set_options", 00:19:38.166 "params": { 00:19:38.167 "small_pool_count": 8192, 00:19:38.167 "large_pool_count": 1024, 00:19:38.167 "small_bufsize": 8192, 00:19:38.167 "large_bufsize": 135168, 00:19:38.167 "enable_numa": false 00:19:38.167 } 00:19:38.167 } 00:19:38.167 ] 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "subsystem": "sock", 00:19:38.167 "config": [ 00:19:38.167 { 00:19:38.167 "method": "sock_set_default_impl", 00:19:38.167 "params": { 00:19:38.167 "impl_name": "posix" 00:19:38.167 } 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "method": "sock_impl_set_options", 00:19:38.167 "params": { 00:19:38.167 "impl_name": "ssl", 00:19:38.167 "recv_buf_size": 4096, 00:19:38.167 "send_buf_size": 4096, 00:19:38.167 "enable_recv_pipe": true, 00:19:38.167 "enable_quickack": false, 00:19:38.167 "enable_placement_id": 0, 00:19:38.167 "enable_zerocopy_send_server": true, 00:19:38.167 "enable_zerocopy_send_client": false, 00:19:38.167 "zerocopy_threshold": 0, 00:19:38.167 "tls_version": 0, 00:19:38.167 "enable_ktls": false 00:19:38.167 } 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "method": "sock_impl_set_options", 00:19:38.167 "params": { 00:19:38.167 "impl_name": "posix", 00:19:38.167 "recv_buf_size": 2097152, 00:19:38.167 "send_buf_size": 2097152, 00:19:38.167 "enable_recv_pipe": true, 00:19:38.167 "enable_quickack": false, 00:19:38.167 "enable_placement_id": 0, 
00:19:38.167 "enable_zerocopy_send_server": true, 00:19:38.167 "enable_zerocopy_send_client": false, 00:19:38.167 "zerocopy_threshold": 0, 00:19:38.167 "tls_version": 0, 00:19:38.167 "enable_ktls": false 00:19:38.167 } 00:19:38.167 } 00:19:38.167 ] 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "subsystem": "vmd", 00:19:38.167 "config": [] 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "subsystem": "accel", 00:19:38.167 "config": [ 00:19:38.167 { 00:19:38.167 "method": "accel_set_options", 00:19:38.167 "params": { 00:19:38.167 "small_cache_size": 128, 00:19:38.167 "large_cache_size": 16, 00:19:38.167 "task_count": 2048, 00:19:38.167 "sequence_count": 2048, 00:19:38.167 "buf_count": 2048 00:19:38.167 } 00:19:38.167 } 00:19:38.167 ] 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "subsystem": "bdev", 00:19:38.167 "config": [ 00:19:38.167 { 00:19:38.167 "method": "bdev_set_options", 00:19:38.167 "params": { 00:19:38.167 "bdev_io_pool_size": 65535, 00:19:38.167 "bdev_io_cache_size": 256, 00:19:38.167 "bdev_auto_examine": true, 00:19:38.167 "iobuf_small_cache_size": 128, 00:19:38.167 "iobuf_large_cache_size": 16 00:19:38.167 } 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "method": "bdev_raid_set_options", 00:19:38.167 "params": { 00:19:38.167 "process_window_size_kb": 1024, 00:19:38.167 "process_max_bandwidth_mb_sec": 0 00:19:38.167 } 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "method": "bdev_iscsi_set_options", 00:19:38.167 "params": { 00:19:38.167 "timeout_sec": 30 00:19:38.167 } 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "method": "bdev_nvme_set_options", 00:19:38.167 "params": { 00:19:38.167 "action_on_timeout": "none", 00:19:38.167 "timeout_us": 0, 00:19:38.167 "timeout_admin_us": 0, 00:19:38.167 "keep_alive_timeout_ms": 10000, 00:19:38.167 "arbitration_burst": 0, 00:19:38.167 "low_priority_weight": 0, 00:19:38.167 "medium_priority_weight": 0, 00:19:38.167 "high_priority_weight": 0, 00:19:38.167 "nvme_adminq_poll_period_us": 10000, 00:19:38.167 "nvme_ioq_poll_period_us": 0, 
00:19:38.167 "io_queue_requests": 0, 00:19:38.167 "delay_cmd_submit": true, 00:19:38.167 "transport_retry_count": 4, 00:19:38.167 "bdev_retry_count": 3, 00:19:38.167 "transport_ack_timeout": 0, 00:19:38.167 "ctrlr_loss_timeout_sec": 0, 00:19:38.167 "reconnect_delay_sec": 0, 00:19:38.167 "fast_io_fail_timeout_sec": 0, 00:19:38.167 "disable_auto_failback": false, 00:19:38.167 "generate_uuids": false, 00:19:38.167 "transport_tos": 0, 00:19:38.167 "nvme_error_stat": false, 00:19:38.167 "rdma_srq_size": 0, 00:19:38.167 "io_path_stat": false, 00:19:38.167 "allow_accel_sequence": false, 00:19:38.167 "rdma_max_cq_size": 0, 00:19:38.167 "rdma_cm_event_timeout_ms": 0, 00:19:38.167 "dhchap_digests": [ 00:19:38.167 "sha256", 00:19:38.167 "sha384", 00:19:38.167 "sha512" 00:19:38.167 ], 00:19:38.167 "dhchap_dhgroups": [ 00:19:38.167 "null", 00:19:38.167 "ffdhe2048", 00:19:38.167 "ffdhe3072", 00:19:38.167 "ffdhe4096", 00:19:38.167 "ffdhe6144", 00:19:38.167 "ffdhe8192" 00:19:38.167 ] 00:19:38.167 } 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "method": "bdev_nvme_set_hotplug", 00:19:38.167 "params": { 00:19:38.167 "period_us": 100000, 00:19:38.167 "enable": false 00:19:38.167 } 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "method": "bdev_malloc_create", 00:19:38.167 "params": { 00:19:38.167 "name": "malloc0", 00:19:38.167 "num_blocks": 8192, 00:19:38.167 "block_size": 4096, 00:19:38.167 "physical_block_size": 4096, 00:19:38.167 "uuid": "dd231e01-2c08-40b8-8974-31f0635689a0", 00:19:38.167 "optimal_io_boundary": 0, 00:19:38.167 "md_size": 0, 00:19:38.167 "dif_type": 0, 00:19:38.167 "dif_is_head_of_md": false, 00:19:38.167 "dif_pi_format": 0 00:19:38.167 } 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "method": "bdev_wait_for_examine" 00:19:38.167 } 00:19:38.167 ] 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "subsystem": "nbd", 00:19:38.167 "config": [] 00:19:38.167 }, 00:19:38.167 { 00:19:38.167 "subsystem": "scheduler", 00:19:38.167 "config": [ 00:19:38.167 { 00:19:38.167 "method": 
"framework_set_scheduler", 00:19:38.167 "params": { 00:19:38.167 "name": "static" 00:19:38.167 } 00:19:38.167 } 00:19:38.167 ] 00:19:38.167 }, 00:19:38.168 { 00:19:38.168 "subsystem": "nvmf", 00:19:38.168 "config": [ 00:19:38.168 { 00:19:38.168 "method": "nvmf_set_config", 00:19:38.168 "params": { 00:19:38.168 "discovery_filter": "match_any", 00:19:38.168 "admin_cmd_passthru": { 00:19:38.168 "identify_ctrlr": false 00:19:38.168 }, 00:19:38.168 "dhchap_digests": [ 00:19:38.168 "sha256", 00:19:38.168 "sha384", 00:19:38.168 "sha512" 00:19:38.168 ], 00:19:38.168 "dhchap_dhgroups": [ 00:19:38.168 "null", 00:19:38.168 "ffdhe2048", 00:19:38.168 "ffdhe3072", 00:19:38.168 "ffdhe4096", 00:19:38.168 "ffdhe6144", 00:19:38.168 "ffdhe8192" 00:19:38.168 ] 00:19:38.168 } 00:19:38.168 }, 00:19:38.168 { 00:19:38.168 "method": "nvmf_set_max_subsystems", 00:19:38.168 "params": { 00:19:38.168 "max_subsystems": 1024 00:19:38.168 } 00:19:38.168 }, 00:19:38.168 { 00:19:38.168 "method": "nvmf_set_crdt", 00:19:38.168 "params": { 00:19:38.168 "crdt1": 0, 00:19:38.168 "crdt2": 0, 00:19:38.168 "crdt3": 0 00:19:38.168 } 00:19:38.168 }, 00:19:38.168 { 00:19:38.168 "method": "nvmf_create_transport", 00:19:38.168 "params": { 00:19:38.168 "trtype": "TCP", 00:19:38.168 "max_queue_depth": 128, 00:19:38.168 "max_io_qpairs_per_ctrlr": 127, 00:19:38.168 "in_capsule_data_size": 4096, 00:19:38.168 "max_io_size": 131072, 00:19:38.168 "io_unit_size": 131072, 00:19:38.168 "max_aq_depth": 128, 00:19:38.168 "num_shared_buffers": 511, 00:19:38.168 "buf_cache_size": 4294967295, 00:19:38.168 "dif_insert_or_strip": false, 00:19:38.168 "zcopy": false, 00:19:38.168 "c2h_success": false, 00:19:38.168 "sock_priority": 0, 00:19:38.168 "abort_timeout_sec": 1, 00:19:38.168 "ack_timeout": 0, 00:19:38.168 "data_wr_pool_size": 0 00:19:38.168 } 00:19:38.168 }, 00:19:38.168 { 00:19:38.168 "method": "nvmf_create_subsystem", 00:19:38.168 "params": { 00:19:38.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.168 
"allow_any_host": false, 00:19:38.168 "serial_number": "SPDK00000000000001", 00:19:38.168 "model_number": "SPDK bdev Controller", 00:19:38.168 "max_namespaces": 10, 00:19:38.168 "min_cntlid": 1, 00:19:38.168 "max_cntlid": 65519, 00:19:38.168 "ana_reporting": false 00:19:38.168 } 00:19:38.168 }, 00:19:38.168 { 00:19:38.168 "method": "nvmf_subsystem_add_host", 00:19:38.168 "params": { 00:19:38.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.168 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.168 "psk": "key0" 00:19:38.168 } 00:19:38.168 }, 00:19:38.168 { 00:19:38.168 "method": "nvmf_subsystem_add_ns", 00:19:38.168 "params": { 00:19:38.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.168 "namespace": { 00:19:38.168 "nsid": 1, 00:19:38.168 "bdev_name": "malloc0", 00:19:38.168 "nguid": "DD231E012C0840B8897431F0635689A0", 00:19:38.168 "uuid": "dd231e01-2c08-40b8-8974-31f0635689a0", 00:19:38.168 "no_auto_visible": false 00:19:38.168 } 00:19:38.168 } 00:19:38.168 }, 00:19:38.168 { 00:19:38.168 "method": "nvmf_subsystem_add_listener", 00:19:38.168 "params": { 00:19:38.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.168 "listen_address": { 00:19:38.168 "trtype": "TCP", 00:19:38.168 "adrfam": "IPv4", 00:19:38.168 "traddr": "10.0.0.2", 00:19:38.168 "trsvcid": "4420" 00:19:38.168 }, 00:19:38.168 "secure_channel": true 00:19:38.168 } 00:19:38.168 } 00:19:38.168 ] 00:19:38.168 } 00:19:38.168 ] 00:19:38.168 }' 00:19:38.168 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:38.429 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:38.429 "subsystems": [ 00:19:38.429 { 00:19:38.429 "subsystem": "keyring", 00:19:38.429 "config": [ 00:19:38.429 { 00:19:38.429 "method": "keyring_file_add_key", 00:19:38.429 "params": { 00:19:38.429 "name": "key0", 00:19:38.429 "path": "/tmp/tmp.egsM6D515H" 00:19:38.429 } 
00:19:38.429 } 00:19:38.429 ] 00:19:38.429 }, 00:19:38.429 { 00:19:38.429 "subsystem": "iobuf", 00:19:38.429 "config": [ 00:19:38.429 { 00:19:38.429 "method": "iobuf_set_options", 00:19:38.430 "params": { 00:19:38.430 "small_pool_count": 8192, 00:19:38.430 "large_pool_count": 1024, 00:19:38.430 "small_bufsize": 8192, 00:19:38.430 "large_bufsize": 135168, 00:19:38.430 "enable_numa": false 00:19:38.430 } 00:19:38.430 } 00:19:38.430 ] 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "subsystem": "sock", 00:19:38.430 "config": [ 00:19:38.430 { 00:19:38.430 "method": "sock_set_default_impl", 00:19:38.430 "params": { 00:19:38.430 "impl_name": "posix" 00:19:38.430 } 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "method": "sock_impl_set_options", 00:19:38.430 "params": { 00:19:38.430 "impl_name": "ssl", 00:19:38.430 "recv_buf_size": 4096, 00:19:38.430 "send_buf_size": 4096, 00:19:38.430 "enable_recv_pipe": true, 00:19:38.430 "enable_quickack": false, 00:19:38.430 "enable_placement_id": 0, 00:19:38.430 "enable_zerocopy_send_server": true, 00:19:38.430 "enable_zerocopy_send_client": false, 00:19:38.430 "zerocopy_threshold": 0, 00:19:38.430 "tls_version": 0, 00:19:38.430 "enable_ktls": false 00:19:38.430 } 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "method": "sock_impl_set_options", 00:19:38.430 "params": { 00:19:38.430 "impl_name": "posix", 00:19:38.430 "recv_buf_size": 2097152, 00:19:38.430 "send_buf_size": 2097152, 00:19:38.430 "enable_recv_pipe": true, 00:19:38.430 "enable_quickack": false, 00:19:38.430 "enable_placement_id": 0, 00:19:38.430 "enable_zerocopy_send_server": true, 00:19:38.430 "enable_zerocopy_send_client": false, 00:19:38.430 "zerocopy_threshold": 0, 00:19:38.430 "tls_version": 0, 00:19:38.430 "enable_ktls": false 00:19:38.430 } 00:19:38.430 } 00:19:38.430 ] 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "subsystem": "vmd", 00:19:38.430 "config": [] 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "subsystem": "accel", 00:19:38.430 "config": [ 00:19:38.430 { 00:19:38.430 
"method": "accel_set_options", 00:19:38.430 "params": { 00:19:38.430 "small_cache_size": 128, 00:19:38.430 "large_cache_size": 16, 00:19:38.430 "task_count": 2048, 00:19:38.430 "sequence_count": 2048, 00:19:38.430 "buf_count": 2048 00:19:38.430 } 00:19:38.430 } 00:19:38.430 ] 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "subsystem": "bdev", 00:19:38.430 "config": [ 00:19:38.430 { 00:19:38.430 "method": "bdev_set_options", 00:19:38.430 "params": { 00:19:38.430 "bdev_io_pool_size": 65535, 00:19:38.430 "bdev_io_cache_size": 256, 00:19:38.430 "bdev_auto_examine": true, 00:19:38.430 "iobuf_small_cache_size": 128, 00:19:38.430 "iobuf_large_cache_size": 16 00:19:38.430 } 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "method": "bdev_raid_set_options", 00:19:38.430 "params": { 00:19:38.430 "process_window_size_kb": 1024, 00:19:38.430 "process_max_bandwidth_mb_sec": 0 00:19:38.430 } 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "method": "bdev_iscsi_set_options", 00:19:38.430 "params": { 00:19:38.430 "timeout_sec": 30 00:19:38.430 } 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "method": "bdev_nvme_set_options", 00:19:38.430 "params": { 00:19:38.430 "action_on_timeout": "none", 00:19:38.430 "timeout_us": 0, 00:19:38.430 "timeout_admin_us": 0, 00:19:38.430 "keep_alive_timeout_ms": 10000, 00:19:38.430 "arbitration_burst": 0, 00:19:38.430 "low_priority_weight": 0, 00:19:38.430 "medium_priority_weight": 0, 00:19:38.430 "high_priority_weight": 0, 00:19:38.430 "nvme_adminq_poll_period_us": 10000, 00:19:38.430 "nvme_ioq_poll_period_us": 0, 00:19:38.430 "io_queue_requests": 512, 00:19:38.430 "delay_cmd_submit": true, 00:19:38.430 "transport_retry_count": 4, 00:19:38.430 "bdev_retry_count": 3, 00:19:38.430 "transport_ack_timeout": 0, 00:19:38.430 "ctrlr_loss_timeout_sec": 0, 00:19:38.430 "reconnect_delay_sec": 0, 00:19:38.430 "fast_io_fail_timeout_sec": 0, 00:19:38.430 "disable_auto_failback": false, 00:19:38.430 "generate_uuids": false, 00:19:38.430 "transport_tos": 0, 00:19:38.430 
"nvme_error_stat": false, 00:19:38.430 "rdma_srq_size": 0, 00:19:38.430 "io_path_stat": false, 00:19:38.430 "allow_accel_sequence": false, 00:19:38.430 "rdma_max_cq_size": 0, 00:19:38.430 "rdma_cm_event_timeout_ms": 0, 00:19:38.430 "dhchap_digests": [ 00:19:38.430 "sha256", 00:19:38.430 "sha384", 00:19:38.430 "sha512" 00:19:38.430 ], 00:19:38.430 "dhchap_dhgroups": [ 00:19:38.430 "null", 00:19:38.430 "ffdhe2048", 00:19:38.430 "ffdhe3072", 00:19:38.430 "ffdhe4096", 00:19:38.430 "ffdhe6144", 00:19:38.430 "ffdhe8192" 00:19:38.430 ] 00:19:38.430 } 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "method": "bdev_nvme_attach_controller", 00:19:38.430 "params": { 00:19:38.430 "name": "TLSTEST", 00:19:38.430 "trtype": "TCP", 00:19:38.430 "adrfam": "IPv4", 00:19:38.430 "traddr": "10.0.0.2", 00:19:38.430 "trsvcid": "4420", 00:19:38.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.430 "prchk_reftag": false, 00:19:38.430 "prchk_guard": false, 00:19:38.430 "ctrlr_loss_timeout_sec": 0, 00:19:38.430 "reconnect_delay_sec": 0, 00:19:38.430 "fast_io_fail_timeout_sec": 0, 00:19:38.430 "psk": "key0", 00:19:38.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.430 "hdgst": false, 00:19:38.430 "ddgst": false, 00:19:38.430 "multipath": "multipath" 00:19:38.430 } 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "method": "bdev_nvme_set_hotplug", 00:19:38.430 "params": { 00:19:38.430 "period_us": 100000, 00:19:38.430 "enable": false 00:19:38.430 } 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "method": "bdev_wait_for_examine" 00:19:38.430 } 00:19:38.430 ] 00:19:38.430 }, 00:19:38.430 { 00:19:38.430 "subsystem": "nbd", 00:19:38.430 "config": [] 00:19:38.430 } 00:19:38.430 ] 00:19:38.430 }' 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2225543 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2225543 ']' 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2225543 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225543 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225543' 00:19:38.430 killing process with pid 2225543 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2225543 00:19:38.430 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.430 00:19:38.430 Latency(us) 00:19:38.430 [2024-11-20T15:31:24.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.430 [2024-11-20T15:31:24.389Z] =================================================================================================================== 00:19:38.430 [2024-11-20T15:31:24.389Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.430 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2225543 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2225177 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2225177 ']' 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2225177 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225177 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225177' 00:19:38.692 killing process with pid 2225177 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2225177 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2225177 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.692 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:38.692 "subsystems": [ 00:19:38.692 { 00:19:38.692 "subsystem": "keyring", 00:19:38.692 "config": [ 00:19:38.692 { 00:19:38.692 "method": "keyring_file_add_key", 00:19:38.692 "params": { 00:19:38.692 "name": "key0", 00:19:38.692 "path": "/tmp/tmp.egsM6D515H" 00:19:38.692 } 00:19:38.692 } 00:19:38.692 ] 00:19:38.692 }, 00:19:38.692 { 00:19:38.692 "subsystem": "iobuf", 00:19:38.692 "config": [ 00:19:38.692 { 00:19:38.692 "method": "iobuf_set_options", 00:19:38.692 "params": { 00:19:38.692 "small_pool_count": 8192, 00:19:38.692 "large_pool_count": 1024, 00:19:38.692 "small_bufsize": 8192, 00:19:38.692 "large_bufsize": 135168, 
00:19:38.692 "enable_numa": false 00:19:38.692 } 00:19:38.692 } 00:19:38.692 ] 00:19:38.692 }, 00:19:38.692 { 00:19:38.692 "subsystem": "sock", 00:19:38.692 "config": [ 00:19:38.692 { 00:19:38.692 "method": "sock_set_default_impl", 00:19:38.692 "params": { 00:19:38.692 "impl_name": "posix" 00:19:38.692 } 00:19:38.692 }, 00:19:38.692 { 00:19:38.692 "method": "sock_impl_set_options", 00:19:38.692 "params": { 00:19:38.692 "impl_name": "ssl", 00:19:38.692 "recv_buf_size": 4096, 00:19:38.692 "send_buf_size": 4096, 00:19:38.692 "enable_recv_pipe": true, 00:19:38.692 "enable_quickack": false, 00:19:38.692 "enable_placement_id": 0, 00:19:38.692 "enable_zerocopy_send_server": true, 00:19:38.692 "enable_zerocopy_send_client": false, 00:19:38.692 "zerocopy_threshold": 0, 00:19:38.692 "tls_version": 0, 00:19:38.692 "enable_ktls": false 00:19:38.692 } 00:19:38.692 }, 00:19:38.692 { 00:19:38.692 "method": "sock_impl_set_options", 00:19:38.692 "params": { 00:19:38.692 "impl_name": "posix", 00:19:38.692 "recv_buf_size": 2097152, 00:19:38.692 "send_buf_size": 2097152, 00:19:38.692 "enable_recv_pipe": true, 00:19:38.692 "enable_quickack": false, 00:19:38.692 "enable_placement_id": 0, 00:19:38.692 "enable_zerocopy_send_server": true, 00:19:38.693 "enable_zerocopy_send_client": false, 00:19:38.693 "zerocopy_threshold": 0, 00:19:38.693 "tls_version": 0, 00:19:38.693 "enable_ktls": false 00:19:38.693 } 00:19:38.693 } 00:19:38.693 ] 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "subsystem": "vmd", 00:19:38.693 "config": [] 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "subsystem": "accel", 00:19:38.693 "config": [ 00:19:38.693 { 00:19:38.693 "method": "accel_set_options", 00:19:38.693 "params": { 00:19:38.693 "small_cache_size": 128, 00:19:38.693 "large_cache_size": 16, 00:19:38.693 "task_count": 2048, 00:19:38.693 "sequence_count": 2048, 00:19:38.693 "buf_count": 2048 00:19:38.693 } 00:19:38.693 } 00:19:38.693 ] 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "subsystem": "bdev", 00:19:38.693 
"config": [ 00:19:38.693 { 00:19:38.693 "method": "bdev_set_options", 00:19:38.693 "params": { 00:19:38.693 "bdev_io_pool_size": 65535, 00:19:38.693 "bdev_io_cache_size": 256, 00:19:38.693 "bdev_auto_examine": true, 00:19:38.693 "iobuf_small_cache_size": 128, 00:19:38.693 "iobuf_large_cache_size": 16 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "bdev_raid_set_options", 00:19:38.693 "params": { 00:19:38.693 "process_window_size_kb": 1024, 00:19:38.693 "process_max_bandwidth_mb_sec": 0 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "bdev_iscsi_set_options", 00:19:38.693 "params": { 00:19:38.693 "timeout_sec": 30 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "bdev_nvme_set_options", 00:19:38.693 "params": { 00:19:38.693 "action_on_timeout": "none", 00:19:38.693 "timeout_us": 0, 00:19:38.693 "timeout_admin_us": 0, 00:19:38.693 "keep_alive_timeout_ms": 10000, 00:19:38.693 "arbitration_burst": 0, 00:19:38.693 "low_priority_weight": 0, 00:19:38.693 "medium_priority_weight": 0, 00:19:38.693 "high_priority_weight": 0, 00:19:38.693 "nvme_adminq_poll_period_us": 10000, 00:19:38.693 "nvme_ioq_poll_period_us": 0, 00:19:38.693 "io_queue_requests": 0, 00:19:38.693 "delay_cmd_submit": true, 00:19:38.693 "transport_retry_count": 4, 00:19:38.693 "bdev_retry_count": 3, 00:19:38.693 "transport_ack_timeout": 0, 00:19:38.693 "ctrlr_loss_timeout_sec": 0, 00:19:38.693 "reconnect_delay_sec": 0, 00:19:38.693 "fast_io_fail_timeout_sec": 0, 00:19:38.693 "disable_auto_failback": false, 00:19:38.693 "generate_uuids": false, 00:19:38.693 "transport_tos": 0, 00:19:38.693 "nvme_error_stat": false, 00:19:38.693 "rdma_srq_size": 0, 00:19:38.693 "io_path_stat": false, 00:19:38.693 "allow_accel_sequence": false, 00:19:38.693 "rdma_max_cq_size": 0, 00:19:38.693 "rdma_cm_event_timeout_ms": 0, 00:19:38.693 "dhchap_digests": [ 00:19:38.693 "sha256", 00:19:38.693 "sha384", 00:19:38.693 "sha512" 00:19:38.693 ], 00:19:38.693 
"dhchap_dhgroups": [ 00:19:38.693 "null", 00:19:38.693 "ffdhe2048", 00:19:38.693 "ffdhe3072", 00:19:38.693 "ffdhe4096", 00:19:38.693 "ffdhe6144", 00:19:38.693 "ffdhe8192" 00:19:38.693 ] 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "bdev_nvme_set_hotplug", 00:19:38.693 "params": { 00:19:38.693 "period_us": 100000, 00:19:38.693 "enable": false 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "bdev_malloc_create", 00:19:38.693 "params": { 00:19:38.693 "name": "malloc0", 00:19:38.693 "num_blocks": 8192, 00:19:38.693 "block_size": 4096, 00:19:38.693 "physical_block_size": 4096, 00:19:38.693 "uuid": "dd231e01-2c08-40b8-8974-31f0635689a0", 00:19:38.693 "optimal_io_boundary": 0, 00:19:38.693 "md_size": 0, 00:19:38.693 "dif_type": 0, 00:19:38.693 "dif_is_head_of_md": false, 00:19:38.693 "dif_pi_format": 0 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "bdev_wait_for_examine" 00:19:38.693 } 00:19:38.693 ] 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "subsystem": "nbd", 00:19:38.693 "config": [] 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "subsystem": "scheduler", 00:19:38.693 "config": [ 00:19:38.693 { 00:19:38.693 "method": "framework_set_scheduler", 00:19:38.693 "params": { 00:19:38.693 "name": "static" 00:19:38.693 } 00:19:38.693 } 00:19:38.693 ] 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "subsystem": "nvmf", 00:19:38.693 "config": [ 00:19:38.693 { 00:19:38.693 "method": "nvmf_set_config", 00:19:38.693 "params": { 00:19:38.693 "discovery_filter": "match_any", 00:19:38.693 "admin_cmd_passthru": { 00:19:38.693 "identify_ctrlr": false 00:19:38.693 }, 00:19:38.693 "dhchap_digests": [ 00:19:38.693 "sha256", 00:19:38.693 "sha384", 00:19:38.693 "sha512" 00:19:38.693 ], 00:19:38.693 "dhchap_dhgroups": [ 00:19:38.693 "null", 00:19:38.693 "ffdhe2048", 00:19:38.693 "ffdhe3072", 00:19:38.693 "ffdhe4096", 00:19:38.693 "ffdhe6144", 00:19:38.693 "ffdhe8192" 00:19:38.693 ] 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 
00:19:38.693 "method": "nvmf_set_max_subsystems", 00:19:38.693 "params": { 00:19:38.693 "max_subsystems": 1024 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "nvmf_set_crdt", 00:19:38.693 "params": { 00:19:38.693 "crdt1": 0, 00:19:38.693 "crdt2": 0, 00:19:38.693 "crdt3": 0 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "nvmf_create_transport", 00:19:38.693 "params": { 00:19:38.693 "trtype": "TCP", 00:19:38.693 "max_queue_depth": 128, 00:19:38.693 "max_io_qpairs_per_ctrlr": 127, 00:19:38.693 "in_capsule_data_size": 4096, 00:19:38.693 "max_io_size": 131072, 00:19:38.693 "io_unit_size": 131072, 00:19:38.693 "max_aq_depth": 128, 00:19:38.693 "num_shared_buffers": 511, 00:19:38.693 "buf_cache_size": 4294967295, 00:19:38.693 "dif_insert_or_strip": false, 00:19:38.693 "zcopy": false, 00:19:38.693 "c2h_success": false, 00:19:38.693 "sock_priority": 0, 00:19:38.693 "abort_timeout_sec": 1, 00:19:38.693 "ack_timeout": 0, 00:19:38.693 "data_wr_pool_size": 0 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "nvmf_create_subsystem", 00:19:38.693 "params": { 00:19:38.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.693 "allow_any_host": false, 00:19:38.693 "serial_number": "SPDK00000000000001", 00:19:38.693 "model_number": "SPDK bdev Controller", 00:19:38.693 "max_namespaces": 10, 00:19:38.693 "min_cntlid": 1, 00:19:38.693 "max_cntlid": 65519, 00:19:38.693 "ana_reporting": false 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "nvmf_subsystem_add_host", 00:19:38.693 "params": { 00:19:38.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.693 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.693 "psk": "key0" 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "nvmf_subsystem_add_ns", 00:19:38.693 "params": { 00:19:38.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.693 "namespace": { 00:19:38.693 "nsid": 1, 00:19:38.693 "bdev_name": "malloc0", 00:19:38.693 "nguid": 
"DD231E012C0840B8897431F0635689A0", 00:19:38.693 "uuid": "dd231e01-2c08-40b8-8974-31f0635689a0", 00:19:38.693 "no_auto_visible": false 00:19:38.693 } 00:19:38.693 } 00:19:38.693 }, 00:19:38.693 { 00:19:38.693 "method": "nvmf_subsystem_add_listener", 00:19:38.693 "params": { 00:19:38.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.693 "listen_address": { 00:19:38.693 "trtype": "TCP", 00:19:38.693 "adrfam": "IPv4", 00:19:38.694 "traddr": "10.0.0.2", 00:19:38.694 "trsvcid": "4420" 00:19:38.694 }, 00:19:38.694 "secure_channel": true 00:19:38.694 } 00:19:38.694 } 00:19:38.694 ] 00:19:38.694 } 00:19:38.694 ] 00:19:38.694 }' 00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2225892 00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2225892 00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2225892 ']' 00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.694 16:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.955 [2024-11-20 16:31:24.653851] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:38.955 [2024-11-20 16:31:24.653902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.955 [2024-11-20 16:31:24.722026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.955 [2024-11-20 16:31:24.749825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.955 [2024-11-20 16:31:24.749855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.955 [2024-11-20 16:31:24.749861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.955 [2024-11-20 16:31:24.749866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.955 [2024-11-20 16:31:24.749870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.955 [2024-11-20 16:31:24.750383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.216 [2024-11-20 16:31:24.944578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.216 [2024-11-20 16:31:24.976603] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.216 [2024-11-20 16:31:24.976794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2226090 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2226090 /var/tmp/bdevperf.sock 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2226090 ']' 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:39.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.790 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:39.790 "subsystems": [ 00:19:39.790 { 00:19:39.790 "subsystem": "keyring", 00:19:39.790 "config": [ 00:19:39.790 { 00:19:39.790 "method": "keyring_file_add_key", 00:19:39.790 "params": { 00:19:39.790 "name": "key0", 00:19:39.790 "path": "/tmp/tmp.egsM6D515H" 00:19:39.790 } 00:19:39.790 } 00:19:39.790 ] 00:19:39.790 }, 00:19:39.790 { 00:19:39.790 "subsystem": "iobuf", 00:19:39.790 "config": [ 00:19:39.790 { 00:19:39.790 "method": "iobuf_set_options", 00:19:39.790 "params": { 00:19:39.790 "small_pool_count": 8192, 00:19:39.791 "large_pool_count": 1024, 00:19:39.791 "small_bufsize": 8192, 00:19:39.791 "large_bufsize": 135168, 00:19:39.791 "enable_numa": false 00:19:39.791 } 00:19:39.791 } 00:19:39.791 ] 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "subsystem": "sock", 00:19:39.791 "config": [ 00:19:39.791 { 00:19:39.791 "method": "sock_set_default_impl", 00:19:39.791 "params": { 00:19:39.791 "impl_name": "posix" 00:19:39.791 } 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "method": "sock_impl_set_options", 00:19:39.791 "params": { 00:19:39.791 "impl_name": "ssl", 00:19:39.791 "recv_buf_size": 4096, 00:19:39.791 "send_buf_size": 4096, 00:19:39.791 "enable_recv_pipe": true, 00:19:39.791 "enable_quickack": false, 00:19:39.791 "enable_placement_id": 0, 00:19:39.791 "enable_zerocopy_send_server": true, 00:19:39.791 
"enable_zerocopy_send_client": false, 00:19:39.791 "zerocopy_threshold": 0, 00:19:39.791 "tls_version": 0, 00:19:39.791 "enable_ktls": false 00:19:39.791 } 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "method": "sock_impl_set_options", 00:19:39.791 "params": { 00:19:39.791 "impl_name": "posix", 00:19:39.791 "recv_buf_size": 2097152, 00:19:39.791 "send_buf_size": 2097152, 00:19:39.791 "enable_recv_pipe": true, 00:19:39.791 "enable_quickack": false, 00:19:39.791 "enable_placement_id": 0, 00:19:39.791 "enable_zerocopy_send_server": true, 00:19:39.791 "enable_zerocopy_send_client": false, 00:19:39.791 "zerocopy_threshold": 0, 00:19:39.791 "tls_version": 0, 00:19:39.791 "enable_ktls": false 00:19:39.791 } 00:19:39.791 } 00:19:39.791 ] 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "subsystem": "vmd", 00:19:39.791 "config": [] 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "subsystem": "accel", 00:19:39.791 "config": [ 00:19:39.791 { 00:19:39.791 "method": "accel_set_options", 00:19:39.791 "params": { 00:19:39.791 "small_cache_size": 128, 00:19:39.791 "large_cache_size": 16, 00:19:39.791 "task_count": 2048, 00:19:39.791 "sequence_count": 2048, 00:19:39.791 "buf_count": 2048 00:19:39.791 } 00:19:39.791 } 00:19:39.791 ] 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "subsystem": "bdev", 00:19:39.791 "config": [ 00:19:39.791 { 00:19:39.791 "method": "bdev_set_options", 00:19:39.791 "params": { 00:19:39.791 "bdev_io_pool_size": 65535, 00:19:39.791 "bdev_io_cache_size": 256, 00:19:39.791 "bdev_auto_examine": true, 00:19:39.791 "iobuf_small_cache_size": 128, 00:19:39.791 "iobuf_large_cache_size": 16 00:19:39.791 } 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "method": "bdev_raid_set_options", 00:19:39.791 "params": { 00:19:39.791 "process_window_size_kb": 1024, 00:19:39.791 "process_max_bandwidth_mb_sec": 0 00:19:39.791 } 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "method": "bdev_iscsi_set_options", 00:19:39.791 "params": { 00:19:39.791 "timeout_sec": 30 00:19:39.791 } 00:19:39.791 }, 
00:19:39.791 { 00:19:39.791 "method": "bdev_nvme_set_options", 00:19:39.791 "params": { 00:19:39.791 "action_on_timeout": "none", 00:19:39.791 "timeout_us": 0, 00:19:39.791 "timeout_admin_us": 0, 00:19:39.791 "keep_alive_timeout_ms": 10000, 00:19:39.791 "arbitration_burst": 0, 00:19:39.791 "low_priority_weight": 0, 00:19:39.791 "medium_priority_weight": 0, 00:19:39.791 "high_priority_weight": 0, 00:19:39.791 "nvme_adminq_poll_period_us": 10000, 00:19:39.791 "nvme_ioq_poll_period_us": 0, 00:19:39.791 "io_queue_requests": 512, 00:19:39.791 "delay_cmd_submit": true, 00:19:39.791 "transport_retry_count": 4, 00:19:39.791 "bdev_retry_count": 3, 00:19:39.791 "transport_ack_timeout": 0, 00:19:39.791 "ctrlr_loss_timeout_sec": 0, 00:19:39.791 "reconnect_delay_sec": 0, 00:19:39.791 "fast_io_fail_timeout_sec": 0, 00:19:39.791 "disable_auto_failback": false, 00:19:39.791 "generate_uuids": false, 00:19:39.791 "transport_tos": 0, 00:19:39.791 "nvme_error_stat": false, 00:19:39.791 "rdma_srq_size": 0, 00:19:39.791 "io_path_stat": false, 00:19:39.791 "allow_accel_sequence": false, 00:19:39.791 "rdma_max_cq_size": 0, 00:19:39.791 "rdma_cm_event_timeout_ms": 0, 00:19:39.791 "dhchap_digests": [ 00:19:39.791 "sha256", 00:19:39.791 "sha384", 00:19:39.791 "sha512" 00:19:39.791 ], 00:19:39.791 "dhchap_dhgroups": [ 00:19:39.791 "null", 00:19:39.791 "ffdhe2048", 00:19:39.791 "ffdhe3072", 00:19:39.791 "ffdhe4096", 00:19:39.791 "ffdhe6144", 00:19:39.791 "ffdhe8192" 00:19:39.791 ] 00:19:39.791 } 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "method": "bdev_nvme_attach_controller", 00:19:39.791 "params": { 00:19:39.791 "name": "TLSTEST", 00:19:39.791 "trtype": "TCP", 00:19:39.791 "adrfam": "IPv4", 00:19:39.791 "traddr": "10.0.0.2", 00:19:39.791 "trsvcid": "4420", 00:19:39.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.791 "prchk_reftag": false, 00:19:39.791 "prchk_guard": false, 00:19:39.791 "ctrlr_loss_timeout_sec": 0, 00:19:39.791 "reconnect_delay_sec": 0, 00:19:39.791 
"fast_io_fail_timeout_sec": 0, 00:19:39.791 "psk": "key0", 00:19:39.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.791 "hdgst": false, 00:19:39.791 "ddgst": false, 00:19:39.791 "multipath": "multipath" 00:19:39.791 } 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "method": "bdev_nvme_set_hotplug", 00:19:39.791 "params": { 00:19:39.791 "period_us": 100000, 00:19:39.791 "enable": false 00:19:39.791 } 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "method": "bdev_wait_for_examine" 00:19:39.791 } 00:19:39.791 ] 00:19:39.791 }, 00:19:39.791 { 00:19:39.791 "subsystem": "nbd", 00:19:39.791 "config": [] 00:19:39.791 } 00:19:39.791 ] 00:19:39.791 }' 00:19:39.791 [2024-11-20 16:31:25.532156] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:39.791 [2024-11-20 16:31:25.532205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226090 ] 00:19:39.791 [2024-11-20 16:31:25.593338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.791 [2024-11-20 16:31:25.622569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.052 [2024-11-20 16:31:25.757726] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.623 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.623 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:40.623 16:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:40.623 Running I/O for 10 seconds... 
00:19:42.508 6280.00 IOPS, 24.53 MiB/s [2024-11-20T15:31:29.478Z] 5649.50 IOPS, 22.07 MiB/s [2024-11-20T15:31:30.865Z] 5729.33 IOPS, 22.38 MiB/s [2024-11-20T15:31:31.438Z] 5451.25 IOPS, 21.29 MiB/s [2024-11-20T15:31:32.822Z] 5206.60 IOPS, 20.34 MiB/s [2024-11-20T15:31:33.764Z] 5308.33 IOPS, 20.74 MiB/s [2024-11-20T15:31:34.708Z] 5436.14 IOPS, 21.23 MiB/s [2024-11-20T15:31:35.650Z] 5510.88 IOPS, 21.53 MiB/s [2024-11-20T15:31:36.593Z] 5427.56 IOPS, 21.20 MiB/s [2024-11-20T15:31:36.593Z] 5481.10 IOPS, 21.41 MiB/s 00:19:50.634 Latency(us) 00:19:50.634 [2024-11-20T15:31:36.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.634 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:50.634 Verification LBA range: start 0x0 length 0x2000 00:19:50.634 TLSTESTn1 : 10.01 5487.48 21.44 0.00 0.00 23295.52 4396.37 23483.73 00:19:50.634 [2024-11-20T15:31:36.593Z] =================================================================================================================== 00:19:50.634 [2024-11-20T15:31:36.593Z] Total : 5487.48 21.44 0.00 0.00 23295.52 4396.37 23483.73 00:19:50.634 { 00:19:50.634 "results": [ 00:19:50.634 { 00:19:50.634 "job": "TLSTESTn1", 00:19:50.634 "core_mask": "0x4", 00:19:50.634 "workload": "verify", 00:19:50.634 "status": "finished", 00:19:50.634 "verify_range": { 00:19:50.634 "start": 0, 00:19:50.634 "length": 8192 00:19:50.634 }, 00:19:50.634 "queue_depth": 128, 00:19:50.634 "io_size": 4096, 00:19:50.634 "runtime": 10.011333, 00:19:50.634 "iops": 5487.481037739929, 00:19:50.634 "mibps": 21.435472803671598, 00:19:50.634 "io_failed": 0, 00:19:50.634 "io_timeout": 0, 00:19:50.634 "avg_latency_us": 23295.516055603082, 00:19:50.634 "min_latency_us": 4396.373333333333, 00:19:50.634 "max_latency_us": 23483.733333333334 00:19:50.634 } 00:19:50.634 ], 00:19:50.634 "core_count": 1 00:19:50.634 } 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2226090 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2226090 ']' 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2226090 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2226090 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2226090' 00:19:50.634 killing process with pid 2226090 00:19:50.634 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2226090 00:19:50.634 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.634 00:19:50.634 Latency(us) 00:19:50.634 [2024-11-20T15:31:36.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.635 [2024-11-20T15:31:36.594Z] =================================================================================================================== 00:19:50.635 [2024-11-20T15:31:36.594Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.635 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2226090 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2225892 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2225892 ']' 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2225892 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225892 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225892' 00:19:50.896 killing process with pid 2225892 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2225892 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2225892 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2228267 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2228267 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:50.896 
16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2228267 ']' 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.896 16:31:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.158 [2024-11-20 16:31:36.888350] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:51.158 [2024-11-20 16:31:36.888404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.158 [2024-11-20 16:31:36.965729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.158 [2024-11-20 16:31:36.998188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.158 [2024-11-20 16:31:36.998222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.158 [2024-11-20 16:31:36.998230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.158 [2024-11-20 16:31:36.998237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:51.158 [2024-11-20 16:31:36.998242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.158 [2024-11-20 16:31:36.998821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.731 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.731 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.731 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.731 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.731 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.992 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.992 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.egsM6D515H 00:19:51.992 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.egsM6D515H 00:19:51.992 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.992 [2024-11-20 16:31:37.868528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.992 16:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:52.253 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.513 [2024-11-20 16:31:38.221416] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:52.513 [2024-11-20 16:31:38.221631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.513 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.513 malloc0 00:19:52.513 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.774 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2228632 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2228632 /var/tmp/bdevperf.sock 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2228632 ']' 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.035 
16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.035 16:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.035 [2024-11-20 16:31:38.970805] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:53.035 [2024-11-20 16:31:38.970903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2228632 ] 00:19:53.296 [2024-11-20 16:31:39.059871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.296 [2024-11-20 16:31:39.089929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.296 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.296 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.296 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:53.556 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:53.556 [2024-11-20 16:31:39.477358] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:53.817 nvme0n1 00:19:53.817 16:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:53.817 Running I/O for 1 seconds... 00:19:54.759 4241.00 IOPS, 16.57 MiB/s 00:19:54.759 Latency(us) 00:19:54.759 [2024-11-20T15:31:40.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.759 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.759 Verification LBA range: start 0x0 length 0x2000 00:19:54.759 nvme0n1 : 1.02 4297.05 16.79 0.00 0.00 29576.48 5952.85 38666.24 00:19:54.759 [2024-11-20T15:31:40.718Z] =================================================================================================================== 00:19:54.759 [2024-11-20T15:31:40.718Z] Total : 4297.05 16.79 0.00 0.00 29576.48 5952.85 38666.24 00:19:54.759 { 00:19:54.759 "results": [ 00:19:54.759 { 00:19:54.759 "job": "nvme0n1", 00:19:54.759 "core_mask": "0x2", 00:19:54.759 "workload": "verify", 00:19:54.759 "status": "finished", 00:19:54.759 "verify_range": { 00:19:54.759 "start": 0, 00:19:54.759 "length": 8192 00:19:54.759 }, 00:19:54.759 "queue_depth": 128, 00:19:54.759 "io_size": 4096, 00:19:54.759 "runtime": 1.016744, 00:19:54.759 "iops": 4297.050191591984, 00:19:54.759 "mibps": 16.785352310906188, 00:19:54.759 "io_failed": 0, 00:19:54.759 "io_timeout": 0, 00:19:54.759 "avg_latency_us": 29576.479426260776, 00:19:54.759 "min_latency_us": 5952.8533333333335, 00:19:54.759 "max_latency_us": 38666.24 00:19:54.759 } 00:19:54.759 ], 00:19:54.759 "core_count": 1 00:19:54.759 } 00:19:54.759 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2228632 00:19:54.759 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2228632 ']' 00:19:54.759 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2228632 00:19:54.759 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:54.759 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.759 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2228632 00:19:55.020 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:55.020 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:55.020 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2228632' 00:19:55.020 killing process with pid 2228632 00:19:55.020 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2228632 00:19:55.020 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.020 00:19:55.020 Latency(us) 00:19:55.020 [2024-11-20T15:31:40.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.021 [2024-11-20T15:31:40.980Z] =================================================================================================================== 00:19:55.021 [2024-11-20T15:31:40.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2228632 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2228267 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2228267 ']' 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2228267 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2228267 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2228267' 00:19:55.021 killing process with pid 2228267 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2228267 00:19:55.021 16:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2228267 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2229077 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2229077 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2229077 ']' 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.283 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.283 [2024-11-20 16:31:41.106122] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:55.283 [2024-11-20 16:31:41.106182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.283 [2024-11-20 16:31:41.188014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.283 [2024-11-20 16:31:41.222669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.283 [2024-11-20 16:31:41.222707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.283 [2024-11-20 16:31:41.222715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.283 [2024-11-20 16:31:41.222722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.283 [2024-11-20 16:31:41.222728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
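The `setup_nvmf_tgt` steps traced above reduce to an ordered sequence of `rpc.py` invocations: create the TCP transport, create the subsystem, add a TLS listener (`-k`), back it with a malloc bdev namespace, register the PSK file with the keyring, and finally allow the host with `--psk`. A condensed restatement of that sequence, with the invocations copied from the log (the ordering checks below are illustrative; they encode the dependency that the key must exist before the host entry references it):

```python
# The rpc.py call sequence from setup_nvmf_tgt, verbatim from the trace above.
calls = [
    "nvmf_create_transport -t tcp -o",
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10",
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k",
    "bdev_malloc_create 32 4096 -b malloc0",
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1",
    "keyring_file_add_key key0 /tmp/tmp.egsM6D515H",
    "nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0",
]
# The listener is created with -k, which is why the target logs
# "TLS support is considered experimental" when it starts listening.
assert any(c.startswith("nvmf_subsystem_add_listener") and c.endswith("-k") for c in calls)
# The PSK is registered in the keyring before nvmf_subsystem_add_host refers to it.
key_idx = next(i for i, c in enumerate(calls) if c.startswith("keyring_file_add_key"))
host_idx = next(i for i, c in enumerate(calls) if c.startswith("nvmf_subsystem_add_host"))
assert key_idx < host_idx
print("ok")
```

On the initiator side the same key name is reused: bdevperf's `bdev_nvme_attach_controller ... --psk key0` (traced above) resolves `key0` through its own `keyring_file_add_key` against the same `/tmp/tmp.egsM6D515H` file.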
00:19:55.283 [2024-11-20 16:31:41.223321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.226 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.226 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:56.226 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:56.226 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:56.226 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.227 [2024-11-20 16:31:41.940606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.227 malloc0 00:19:56.227 [2024-11-20 16:31:41.967296] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.227 [2024-11-20 16:31:41.967519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2229331 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2229331 /var/tmp/bdevperf.sock 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2229331 ']' 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.227 16:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.227 [2024-11-20 16:31:42.046282] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:19:56.227 [2024-11-20 16:31:42.046334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229331 ] 00:19:56.227 [2024-11-20 16:31:42.128849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.227 [2024-11-20 16:31:42.158607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.170 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.170 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.170 16:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.egsM6D515H 00:19:57.170 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:57.430 [2024-11-20 16:31:43.167360] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.430 nvme0n1 00:19:57.430 16:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:57.430 Running I/O for 1 seconds... 
00:19:58.816 4968.00 IOPS, 19.41 MiB/s 00:19:58.816 Latency(us) 00:19:58.816 [2024-11-20T15:31:44.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.816 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:58.816 Verification LBA range: start 0x0 length 0x2000 00:19:58.816 nvme0n1 : 1.02 4998.99 19.53 0.00 0.00 25372.02 4587.52 25559.04 00:19:58.816 [2024-11-20T15:31:44.775Z] =================================================================================================================== 00:19:58.816 [2024-11-20T15:31:44.775Z] Total : 4998.99 19.53 0.00 0.00 25372.02 4587.52 25559.04 00:19:58.816 { 00:19:58.816 "results": [ 00:19:58.816 { 00:19:58.816 "job": "nvme0n1", 00:19:58.816 "core_mask": "0x2", 00:19:58.816 "workload": "verify", 00:19:58.816 "status": "finished", 00:19:58.816 "verify_range": { 00:19:58.816 "start": 0, 00:19:58.816 "length": 8192 00:19:58.816 }, 00:19:58.816 "queue_depth": 128, 00:19:58.816 "io_size": 4096, 00:19:58.816 "runtime": 1.019405, 00:19:58.816 "iops": 4998.99451150426, 00:19:58.816 "mibps": 19.527322310563516, 00:19:58.816 "io_failed": 0, 00:19:58.816 "io_timeout": 0, 00:19:58.816 "avg_latency_us": 25372.016661433805, 00:19:58.816 "min_latency_us": 4587.52, 00:19:58.816 "max_latency_us": 25559.04 00:19:58.816 } 00:19:58.816 ], 00:19:58.816 "core_count": 1 00:19:58.816 } 00:19:58.816 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:58.816 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.816 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.816 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.816 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:58.816 "subsystems": [ 00:19:58.816 { 00:19:58.816 "subsystem": "keyring", 
00:19:58.816 "config": [ 00:19:58.816 { 00:19:58.816 "method": "keyring_file_add_key", 00:19:58.816 "params": { 00:19:58.816 "name": "key0", 00:19:58.816 "path": "/tmp/tmp.egsM6D515H" 00:19:58.816 } 00:19:58.816 } 00:19:58.816 ] 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "subsystem": "iobuf", 00:19:58.816 "config": [ 00:19:58.816 { 00:19:58.816 "method": "iobuf_set_options", 00:19:58.816 "params": { 00:19:58.816 "small_pool_count": 8192, 00:19:58.816 "large_pool_count": 1024, 00:19:58.816 "small_bufsize": 8192, 00:19:58.816 "large_bufsize": 135168, 00:19:58.816 "enable_numa": false 00:19:58.816 } 00:19:58.816 } 00:19:58.816 ] 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "subsystem": "sock", 00:19:58.816 "config": [ 00:19:58.816 { 00:19:58.816 "method": "sock_set_default_impl", 00:19:58.816 "params": { 00:19:58.816 "impl_name": "posix" 00:19:58.816 } 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "method": "sock_impl_set_options", 00:19:58.816 "params": { 00:19:58.816 "impl_name": "ssl", 00:19:58.816 "recv_buf_size": 4096, 00:19:58.816 "send_buf_size": 4096, 00:19:58.816 "enable_recv_pipe": true, 00:19:58.816 "enable_quickack": false, 00:19:58.816 "enable_placement_id": 0, 00:19:58.816 "enable_zerocopy_send_server": true, 00:19:58.816 "enable_zerocopy_send_client": false, 00:19:58.816 "zerocopy_threshold": 0, 00:19:58.816 "tls_version": 0, 00:19:58.816 "enable_ktls": false 00:19:58.816 } 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "method": "sock_impl_set_options", 00:19:58.816 "params": { 00:19:58.816 "impl_name": "posix", 00:19:58.816 "recv_buf_size": 2097152, 00:19:58.816 "send_buf_size": 2097152, 00:19:58.816 "enable_recv_pipe": true, 00:19:58.816 "enable_quickack": false, 00:19:58.816 "enable_placement_id": 0, 00:19:58.816 "enable_zerocopy_send_server": true, 00:19:58.816 "enable_zerocopy_send_client": false, 00:19:58.816 "zerocopy_threshold": 0, 00:19:58.816 "tls_version": 0, 00:19:58.816 "enable_ktls": false 00:19:58.816 } 00:19:58.816 } 00:19:58.816 ] 
00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "subsystem": "vmd", 00:19:58.816 "config": [] 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "subsystem": "accel", 00:19:58.816 "config": [ 00:19:58.816 { 00:19:58.816 "method": "accel_set_options", 00:19:58.816 "params": { 00:19:58.816 "small_cache_size": 128, 00:19:58.816 "large_cache_size": 16, 00:19:58.816 "task_count": 2048, 00:19:58.816 "sequence_count": 2048, 00:19:58.816 "buf_count": 2048 00:19:58.816 } 00:19:58.816 } 00:19:58.816 ] 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "subsystem": "bdev", 00:19:58.816 "config": [ 00:19:58.816 { 00:19:58.816 "method": "bdev_set_options", 00:19:58.816 "params": { 00:19:58.816 "bdev_io_pool_size": 65535, 00:19:58.816 "bdev_io_cache_size": 256, 00:19:58.816 "bdev_auto_examine": true, 00:19:58.816 "iobuf_small_cache_size": 128, 00:19:58.816 "iobuf_large_cache_size": 16 00:19:58.816 } 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "method": "bdev_raid_set_options", 00:19:58.816 "params": { 00:19:58.816 "process_window_size_kb": 1024, 00:19:58.816 "process_max_bandwidth_mb_sec": 0 00:19:58.816 } 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "method": "bdev_iscsi_set_options", 00:19:58.816 "params": { 00:19:58.816 "timeout_sec": 30 00:19:58.816 } 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "method": "bdev_nvme_set_options", 00:19:58.816 "params": { 00:19:58.816 "action_on_timeout": "none", 00:19:58.816 "timeout_us": 0, 00:19:58.816 "timeout_admin_us": 0, 00:19:58.816 "keep_alive_timeout_ms": 10000, 00:19:58.816 "arbitration_burst": 0, 00:19:58.816 "low_priority_weight": 0, 00:19:58.816 "medium_priority_weight": 0, 00:19:58.816 "high_priority_weight": 0, 00:19:58.816 "nvme_adminq_poll_period_us": 10000, 00:19:58.816 "nvme_ioq_poll_period_us": 0, 00:19:58.816 "io_queue_requests": 0, 00:19:58.816 "delay_cmd_submit": true, 00:19:58.816 "transport_retry_count": 4, 00:19:58.816 "bdev_retry_count": 3, 00:19:58.816 "transport_ack_timeout": 0, 00:19:58.816 "ctrlr_loss_timeout_sec": 0, 00:19:58.816 
"reconnect_delay_sec": 0, 00:19:58.816 "fast_io_fail_timeout_sec": 0, 00:19:58.816 "disable_auto_failback": false, 00:19:58.816 "generate_uuids": false, 00:19:58.816 "transport_tos": 0, 00:19:58.816 "nvme_error_stat": false, 00:19:58.816 "rdma_srq_size": 0, 00:19:58.816 "io_path_stat": false, 00:19:58.816 "allow_accel_sequence": false, 00:19:58.816 "rdma_max_cq_size": 0, 00:19:58.816 "rdma_cm_event_timeout_ms": 0, 00:19:58.816 "dhchap_digests": [ 00:19:58.816 "sha256", 00:19:58.816 "sha384", 00:19:58.816 "sha512" 00:19:58.816 ], 00:19:58.816 "dhchap_dhgroups": [ 00:19:58.816 "null", 00:19:58.816 "ffdhe2048", 00:19:58.816 "ffdhe3072", 00:19:58.816 "ffdhe4096", 00:19:58.816 "ffdhe6144", 00:19:58.816 "ffdhe8192" 00:19:58.816 ] 00:19:58.816 } 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "method": "bdev_nvme_set_hotplug", 00:19:58.816 "params": { 00:19:58.816 "period_us": 100000, 00:19:58.816 "enable": false 00:19:58.816 } 00:19:58.816 }, 00:19:58.816 { 00:19:58.816 "method": "bdev_malloc_create", 00:19:58.816 "params": { 00:19:58.816 "name": "malloc0", 00:19:58.816 "num_blocks": 8192, 00:19:58.816 "block_size": 4096, 00:19:58.816 "physical_block_size": 4096, 00:19:58.817 "uuid": "69f5767b-287a-4afa-8a4e-a39b2953fc18", 00:19:58.817 "optimal_io_boundary": 0, 00:19:58.817 "md_size": 0, 00:19:58.817 "dif_type": 0, 00:19:58.817 "dif_is_head_of_md": false, 00:19:58.817 "dif_pi_format": 0 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "bdev_wait_for_examine" 00:19:58.817 } 00:19:58.817 ] 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "subsystem": "nbd", 00:19:58.817 "config": [] 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "subsystem": "scheduler", 00:19:58.817 "config": [ 00:19:58.817 { 00:19:58.817 "method": "framework_set_scheduler", 00:19:58.817 "params": { 00:19:58.817 "name": "static" 00:19:58.817 } 00:19:58.817 } 00:19:58.817 ] 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "subsystem": "nvmf", 00:19:58.817 "config": [ 00:19:58.817 { 00:19:58.817 
"method": "nvmf_set_config", 00:19:58.817 "params": { 00:19:58.817 "discovery_filter": "match_any", 00:19:58.817 "admin_cmd_passthru": { 00:19:58.817 "identify_ctrlr": false 00:19:58.817 }, 00:19:58.817 "dhchap_digests": [ 00:19:58.817 "sha256", 00:19:58.817 "sha384", 00:19:58.817 "sha512" 00:19:58.817 ], 00:19:58.817 "dhchap_dhgroups": [ 00:19:58.817 "null", 00:19:58.817 "ffdhe2048", 00:19:58.817 "ffdhe3072", 00:19:58.817 "ffdhe4096", 00:19:58.817 "ffdhe6144", 00:19:58.817 "ffdhe8192" 00:19:58.817 ] 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "nvmf_set_max_subsystems", 00:19:58.817 "params": { 00:19:58.817 "max_subsystems": 1024 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "nvmf_set_crdt", 00:19:58.817 "params": { 00:19:58.817 "crdt1": 0, 00:19:58.817 "crdt2": 0, 00:19:58.817 "crdt3": 0 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "nvmf_create_transport", 00:19:58.817 "params": { 00:19:58.817 "trtype": "TCP", 00:19:58.817 "max_queue_depth": 128, 00:19:58.817 "max_io_qpairs_per_ctrlr": 127, 00:19:58.817 "in_capsule_data_size": 4096, 00:19:58.817 "max_io_size": 131072, 00:19:58.817 "io_unit_size": 131072, 00:19:58.817 "max_aq_depth": 128, 00:19:58.817 "num_shared_buffers": 511, 00:19:58.817 "buf_cache_size": 4294967295, 00:19:58.817 "dif_insert_or_strip": false, 00:19:58.817 "zcopy": false, 00:19:58.817 "c2h_success": false, 00:19:58.817 "sock_priority": 0, 00:19:58.817 "abort_timeout_sec": 1, 00:19:58.817 "ack_timeout": 0, 00:19:58.817 "data_wr_pool_size": 0 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "nvmf_create_subsystem", 00:19:58.817 "params": { 00:19:58.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.817 "allow_any_host": false, 00:19:58.817 "serial_number": "00000000000000000000", 00:19:58.817 "model_number": "SPDK bdev Controller", 00:19:58.817 "max_namespaces": 32, 00:19:58.817 "min_cntlid": 1, 00:19:58.817 "max_cntlid": 65519, 00:19:58.817 "ana_reporting": 
false 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "nvmf_subsystem_add_host", 00:19:58.817 "params": { 00:19:58.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.817 "host": "nqn.2016-06.io.spdk:host1", 00:19:58.817 "psk": "key0" 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "nvmf_subsystem_add_ns", 00:19:58.817 "params": { 00:19:58.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.817 "namespace": { 00:19:58.817 "nsid": 1, 00:19:58.817 "bdev_name": "malloc0", 00:19:58.817 "nguid": "69F5767B287A4AFA8A4EA39B2953FC18", 00:19:58.817 "uuid": "69f5767b-287a-4afa-8a4e-a39b2953fc18", 00:19:58.817 "no_auto_visible": false 00:19:58.817 } 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "nvmf_subsystem_add_listener", 00:19:58.817 "params": { 00:19:58.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.817 "listen_address": { 00:19:58.817 "trtype": "TCP", 00:19:58.817 "adrfam": "IPv4", 00:19:58.817 "traddr": "10.0.0.2", 00:19:58.817 "trsvcid": "4420" 00:19:58.817 }, 00:19:58.817 "secure_channel": false, 00:19:58.817 "sock_impl": "ssl" 00:19:58.817 } 00:19:58.817 } 00:19:58.817 ] 00:19:58.817 } 00:19:58.817 ] 00:19:58.817 }' 00:19:58.817 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:58.817 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:58.817 "subsystems": [ 00:19:58.817 { 00:19:58.817 "subsystem": "keyring", 00:19:58.817 "config": [ 00:19:58.817 { 00:19:58.817 "method": "keyring_file_add_key", 00:19:58.817 "params": { 00:19:58.817 "name": "key0", 00:19:58.817 "path": "/tmp/tmp.egsM6D515H" 00:19:58.817 } 00:19:58.817 } 00:19:58.817 ] 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "subsystem": "iobuf", 00:19:58.817 "config": [ 00:19:58.817 { 00:19:58.817 "method": "iobuf_set_options", 00:19:58.817 "params": { 00:19:58.817 "small_pool_count": 
8192, 00:19:58.817 "large_pool_count": 1024, 00:19:58.817 "small_bufsize": 8192, 00:19:58.817 "large_bufsize": 135168, 00:19:58.817 "enable_numa": false 00:19:58.817 } 00:19:58.817 } 00:19:58.817 ] 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "subsystem": "sock", 00:19:58.817 "config": [ 00:19:58.817 { 00:19:58.817 "method": "sock_set_default_impl", 00:19:58.817 "params": { 00:19:58.817 "impl_name": "posix" 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "sock_impl_set_options", 00:19:58.817 "params": { 00:19:58.817 "impl_name": "ssl", 00:19:58.817 "recv_buf_size": 4096, 00:19:58.817 "send_buf_size": 4096, 00:19:58.817 "enable_recv_pipe": true, 00:19:58.817 "enable_quickack": false, 00:19:58.817 "enable_placement_id": 0, 00:19:58.817 "enable_zerocopy_send_server": true, 00:19:58.817 "enable_zerocopy_send_client": false, 00:19:58.817 "zerocopy_threshold": 0, 00:19:58.817 "tls_version": 0, 00:19:58.817 "enable_ktls": false 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "sock_impl_set_options", 00:19:58.817 "params": { 00:19:58.817 "impl_name": "posix", 00:19:58.817 "recv_buf_size": 2097152, 00:19:58.817 "send_buf_size": 2097152, 00:19:58.817 "enable_recv_pipe": true, 00:19:58.817 "enable_quickack": false, 00:19:58.817 "enable_placement_id": 0, 00:19:58.817 "enable_zerocopy_send_server": true, 00:19:58.817 "enable_zerocopy_send_client": false, 00:19:58.817 "zerocopy_threshold": 0, 00:19:58.817 "tls_version": 0, 00:19:58.817 "enable_ktls": false 00:19:58.817 } 00:19:58.817 } 00:19:58.817 ] 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "subsystem": "vmd", 00:19:58.817 "config": [] 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "subsystem": "accel", 00:19:58.817 "config": [ 00:19:58.817 { 00:19:58.817 "method": "accel_set_options", 00:19:58.817 "params": { 00:19:58.817 "small_cache_size": 128, 00:19:58.817 "large_cache_size": 16, 00:19:58.817 "task_count": 2048, 00:19:58.817 "sequence_count": 2048, 00:19:58.817 "buf_count": 2048 
00:19:58.817 } 00:19:58.817 } 00:19:58.817 ] 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "subsystem": "bdev", 00:19:58.817 "config": [ 00:19:58.817 { 00:19:58.817 "method": "bdev_set_options", 00:19:58.817 "params": { 00:19:58.817 "bdev_io_pool_size": 65535, 00:19:58.817 "bdev_io_cache_size": 256, 00:19:58.817 "bdev_auto_examine": true, 00:19:58.817 "iobuf_small_cache_size": 128, 00:19:58.817 "iobuf_large_cache_size": 16 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "bdev_raid_set_options", 00:19:58.817 "params": { 00:19:58.817 "process_window_size_kb": 1024, 00:19:58.817 "process_max_bandwidth_mb_sec": 0 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "bdev_iscsi_set_options", 00:19:58.817 "params": { 00:19:58.817 "timeout_sec": 30 00:19:58.817 } 00:19:58.817 }, 00:19:58.817 { 00:19:58.817 "method": "bdev_nvme_set_options", 00:19:58.817 "params": { 00:19:58.817 "action_on_timeout": "none", 00:19:58.817 "timeout_us": 0, 00:19:58.817 "timeout_admin_us": 0, 00:19:58.818 "keep_alive_timeout_ms": 10000, 00:19:58.818 "arbitration_burst": 0, 00:19:58.818 "low_priority_weight": 0, 00:19:58.818 "medium_priority_weight": 0, 00:19:58.818 "high_priority_weight": 0, 00:19:58.818 "nvme_adminq_poll_period_us": 10000, 00:19:58.818 "nvme_ioq_poll_period_us": 0, 00:19:58.818 "io_queue_requests": 512, 00:19:58.818 "delay_cmd_submit": true, 00:19:58.818 "transport_retry_count": 4, 00:19:58.818 "bdev_retry_count": 3, 00:19:58.818 "transport_ack_timeout": 0, 00:19:58.818 "ctrlr_loss_timeout_sec": 0, 00:19:58.818 "reconnect_delay_sec": 0, 00:19:58.818 "fast_io_fail_timeout_sec": 0, 00:19:58.818 "disable_auto_failback": false, 00:19:58.818 "generate_uuids": false, 00:19:58.818 "transport_tos": 0, 00:19:58.818 "nvme_error_stat": false, 00:19:58.818 "rdma_srq_size": 0, 00:19:58.818 "io_path_stat": false, 00:19:58.818 "allow_accel_sequence": false, 00:19:58.818 "rdma_max_cq_size": 0, 00:19:58.818 "rdma_cm_event_timeout_ms": 0, 00:19:58.818 
"dhchap_digests": [ 00:19:58.818 "sha256", 00:19:58.818 "sha384", 00:19:58.818 "sha512" 00:19:58.818 ], 00:19:58.818 "dhchap_dhgroups": [ 00:19:58.818 "null", 00:19:58.818 "ffdhe2048", 00:19:58.818 "ffdhe3072", 00:19:58.818 "ffdhe4096", 00:19:58.818 "ffdhe6144", 00:19:58.818 "ffdhe8192" 00:19:58.818 ] 00:19:58.818 } 00:19:58.818 }, 00:19:58.818 { 00:19:58.818 "method": "bdev_nvme_attach_controller", 00:19:58.818 "params": { 00:19:58.818 "name": "nvme0", 00:19:58.818 "trtype": "TCP", 00:19:58.818 "adrfam": "IPv4", 00:19:58.818 "traddr": "10.0.0.2", 00:19:58.818 "trsvcid": "4420", 00:19:58.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.818 "prchk_reftag": false, 00:19:58.818 "prchk_guard": false, 00:19:58.818 "ctrlr_loss_timeout_sec": 0, 00:19:58.818 "reconnect_delay_sec": 0, 00:19:58.818 "fast_io_fail_timeout_sec": 0, 00:19:58.818 "psk": "key0", 00:19:58.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.818 "hdgst": false, 00:19:58.818 "ddgst": false, 00:19:58.818 "multipath": "multipath" 00:19:58.818 } 00:19:58.818 }, 00:19:58.818 { 00:19:58.818 "method": "bdev_nvme_set_hotplug", 00:19:58.818 "params": { 00:19:58.818 "period_us": 100000, 00:19:58.818 "enable": false 00:19:58.818 } 00:19:58.818 }, 00:19:58.818 { 00:19:58.818 "method": "bdev_enable_histogram", 00:19:58.818 "params": { 00:19:58.818 "name": "nvme0n1", 00:19:58.818 "enable": true 00:19:58.818 } 00:19:58.818 }, 00:19:58.818 { 00:19:58.818 "method": "bdev_wait_for_examine" 00:19:58.818 } 00:19:58.818 ] 00:19:58.818 }, 00:19:58.818 { 00:19:58.818 "subsystem": "nbd", 00:19:58.818 "config": [] 00:19:58.818 } 00:19:58.818 ] 00:19:58.818 }' 00:19:58.818 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2229331 00:19:58.818 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2229331 ']' 00:19:58.818 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2229331 00:19:58.818 16:31:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229331 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229331' 00:19:59.079 killing process with pid 2229331 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2229331 00:19:59.079 Received shutdown signal, test time was about 1.000000 seconds 00:19:59.079 00:19:59.079 Latency(us) 00:19:59.079 [2024-11-20T15:31:45.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.079 [2024-11-20T15:31:45.038Z] =================================================================================================================== 00:19:59.079 [2024-11-20T15:31:45.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2229331 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2229077 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2229077 ']' 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2229077 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:59.079 16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.079 
16:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229077 00:19:59.079 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.079 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.079 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229077' 00:19:59.079 killing process with pid 2229077 00:19:59.079 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2229077 00:19:59.079 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2229077 00:19:59.341 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:59.341 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.341 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.341 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.341 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:59.341 "subsystems": [ 00:19:59.341 { 00:19:59.341 "subsystem": "keyring", 00:19:59.341 "config": [ 00:19:59.341 { 00:19:59.341 "method": "keyring_file_add_key", 00:19:59.341 "params": { 00:19:59.341 "name": "key0", 00:19:59.341 "path": "/tmp/tmp.egsM6D515H" 00:19:59.341 } 00:19:59.341 } 00:19:59.341 ] 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "subsystem": "iobuf", 00:19:59.341 "config": [ 00:19:59.341 { 00:19:59.341 "method": "iobuf_set_options", 00:19:59.341 "params": { 00:19:59.341 "small_pool_count": 8192, 00:19:59.341 "large_pool_count": 1024, 00:19:59.341 "small_bufsize": 8192, 00:19:59.341 "large_bufsize": 135168, 00:19:59.341 "enable_numa": false 00:19:59.341 } 00:19:59.341 } 00:19:59.341 
] 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "subsystem": "sock", 00:19:59.341 "config": [ 00:19:59.341 { 00:19:59.341 "method": "sock_set_default_impl", 00:19:59.341 "params": { 00:19:59.341 "impl_name": "posix" 00:19:59.341 } 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "method": "sock_impl_set_options", 00:19:59.341 "params": { 00:19:59.341 "impl_name": "ssl", 00:19:59.341 "recv_buf_size": 4096, 00:19:59.341 "send_buf_size": 4096, 00:19:59.341 "enable_recv_pipe": true, 00:19:59.341 "enable_quickack": false, 00:19:59.341 "enable_placement_id": 0, 00:19:59.341 "enable_zerocopy_send_server": true, 00:19:59.341 "enable_zerocopy_send_client": false, 00:19:59.341 "zerocopy_threshold": 0, 00:19:59.341 "tls_version": 0, 00:19:59.341 "enable_ktls": false 00:19:59.341 } 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "method": "sock_impl_set_options", 00:19:59.341 "params": { 00:19:59.341 "impl_name": "posix", 00:19:59.341 "recv_buf_size": 2097152, 00:19:59.341 "send_buf_size": 2097152, 00:19:59.341 "enable_recv_pipe": true, 00:19:59.341 "enable_quickack": false, 00:19:59.341 "enable_placement_id": 0, 00:19:59.341 "enable_zerocopy_send_server": true, 00:19:59.341 "enable_zerocopy_send_client": false, 00:19:59.341 "zerocopy_threshold": 0, 00:19:59.341 "tls_version": 0, 00:19:59.341 "enable_ktls": false 00:19:59.341 } 00:19:59.341 } 00:19:59.341 ] 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "subsystem": "vmd", 00:19:59.341 "config": [] 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "subsystem": "accel", 00:19:59.341 "config": [ 00:19:59.341 { 00:19:59.341 "method": "accel_set_options", 00:19:59.341 "params": { 00:19:59.341 "small_cache_size": 128, 00:19:59.341 "large_cache_size": 16, 00:19:59.341 "task_count": 2048, 00:19:59.341 "sequence_count": 2048, 00:19:59.341 "buf_count": 2048 00:19:59.341 } 00:19:59.341 } 00:19:59.341 ] 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "subsystem": "bdev", 00:19:59.341 "config": [ 00:19:59.341 { 00:19:59.341 "method": "bdev_set_options", 
00:19:59.341 "params": { 00:19:59.341 "bdev_io_pool_size": 65535, 00:19:59.341 "bdev_io_cache_size": 256, 00:19:59.341 "bdev_auto_examine": true, 00:19:59.341 "iobuf_small_cache_size": 128, 00:19:59.341 "iobuf_large_cache_size": 16 00:19:59.341 } 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "method": "bdev_raid_set_options", 00:19:59.341 "params": { 00:19:59.341 "process_window_size_kb": 1024, 00:19:59.341 "process_max_bandwidth_mb_sec": 0 00:19:59.341 } 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "method": "bdev_iscsi_set_options", 00:19:59.341 "params": { 00:19:59.341 "timeout_sec": 30 00:19:59.341 } 00:19:59.341 }, 00:19:59.341 { 00:19:59.341 "method": "bdev_nvme_set_options", 00:19:59.341 "params": { 00:19:59.341 "action_on_timeout": "none", 00:19:59.341 "timeout_us": 0, 00:19:59.341 "timeout_admin_us": 0, 00:19:59.341 "keep_alive_timeout_ms": 10000, 00:19:59.342 "arbitration_burst": 0, 00:19:59.342 "low_priority_weight": 0, 00:19:59.342 "medium_priority_weight": 0, 00:19:59.342 "high_priority_weight": 0, 00:19:59.342 "nvme_adminq_poll_period_us": 10000, 00:19:59.342 "nvme_ioq_poll_period_us": 0, 00:19:59.342 "io_queue_requests": 0, 00:19:59.342 "delay_cmd_submit": true, 00:19:59.342 "transport_retry_count": 4, 00:19:59.342 "bdev_retry_count": 3, 00:19:59.342 "transport_ack_timeout": 0, 00:19:59.342 "ctrlr_loss_timeout_sec": 0, 00:19:59.342 "reconnect_delay_sec": 0, 00:19:59.342 "fast_io_fail_timeout_sec": 0, 00:19:59.342 "disable_auto_failback": false, 00:19:59.342 "generate_uuids": false, 00:19:59.342 "transport_tos": 0, 00:19:59.342 "nvme_error_stat": false, 00:19:59.342 "rdma_srq_size": 0, 00:19:59.342 "io_path_stat": false, 00:19:59.342 "allow_accel_sequence": false, 00:19:59.342 "rdma_max_cq_size": 0, 00:19:59.342 "rdma_cm_event_timeout_ms": 0, 00:19:59.342 "dhchap_digests": [ 00:19:59.342 "sha256", 00:19:59.342 "sha384", 00:19:59.342 "sha512" 00:19:59.342 ], 00:19:59.342 "dhchap_dhgroups": [ 00:19:59.342 "null", 00:19:59.342 "ffdhe2048", 00:19:59.342 
"ffdhe3072", 00:19:59.342 "ffdhe4096", 00:19:59.342 "ffdhe6144", 00:19:59.342 "ffdhe8192" 00:19:59.342 ] 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "bdev_nvme_set_hotplug", 00:19:59.342 "params": { 00:19:59.342 "period_us": 100000, 00:19:59.342 "enable": false 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "bdev_malloc_create", 00:19:59.342 "params": { 00:19:59.342 "name": "malloc0", 00:19:59.342 "num_blocks": 8192, 00:19:59.342 "block_size": 4096, 00:19:59.342 "physical_block_size": 4096, 00:19:59.342 "uuid": "69f5767b-287a-4afa-8a4e-a39b2953fc18", 00:19:59.342 "optimal_io_boundary": 0, 00:19:59.342 "md_size": 0, 00:19:59.342 "dif_type": 0, 00:19:59.342 "dif_is_head_of_md": false, 00:19:59.342 "dif_pi_format": 0 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "bdev_wait_for_examine" 00:19:59.342 } 00:19:59.342 ] 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "subsystem": "nbd", 00:19:59.342 "config": [] 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "subsystem": "scheduler", 00:19:59.342 "config": [ 00:19:59.342 { 00:19:59.342 "method": "framework_set_scheduler", 00:19:59.342 "params": { 00:19:59.342 "name": "static" 00:19:59.342 } 00:19:59.342 } 00:19:59.342 ] 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "subsystem": "nvmf", 00:19:59.342 "config": [ 00:19:59.342 { 00:19:59.342 "method": "nvmf_set_config", 00:19:59.342 "params": { 00:19:59.342 "discovery_filter": "match_any", 00:19:59.342 "admin_cmd_passthru": { 00:19:59.342 "identify_ctrlr": false 00:19:59.342 }, 00:19:59.342 "dhchap_digests": [ 00:19:59.342 "sha256", 00:19:59.342 "sha384", 00:19:59.342 "sha512" 00:19:59.342 ], 00:19:59.342 "dhchap_dhgroups": [ 00:19:59.342 "null", 00:19:59.342 "ffdhe2048", 00:19:59.342 "ffdhe3072", 00:19:59.342 "ffdhe4096", 00:19:59.342 "ffdhe6144", 00:19:59.342 "ffdhe8192" 00:19:59.342 ] 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "nvmf_set_max_subsystems", 00:19:59.342 "params": { 
00:19:59.342 "max_subsystems": 1024 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "nvmf_set_crdt", 00:19:59.342 "params": { 00:19:59.342 "crdt1": 0, 00:19:59.342 "crdt2": 0, 00:19:59.342 "crdt3": 0 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "nvmf_create_transport", 00:19:59.342 "params": { 00:19:59.342 "trtype": "TCP", 00:19:59.342 "max_queue_depth": 128, 00:19:59.342 "max_io_qpairs_per_ctrlr": 127, 00:19:59.342 "in_capsule_data_size": 4096, 00:19:59.342 "max_io_size": 131072, 00:19:59.342 "io_unit_size": 131072, 00:19:59.342 "max_aq_depth": 128, 00:19:59.342 "num_shared_buffers": 511, 00:19:59.342 "buf_cache_size": 4294967295, 00:19:59.342 "dif_insert_or_strip": false, 00:19:59.342 "zcopy": false, 00:19:59.342 "c2h_success": false, 00:19:59.342 "sock_priority": 0, 00:19:59.342 "abort_timeout_sec": 1, 00:19:59.342 "ack_timeout": 0, 00:19:59.342 "data_wr_pool_size": 0 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "nvmf_create_subsystem", 00:19:59.342 "params": { 00:19:59.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.342 "allow_any_host": false, 00:19:59.342 "serial_number": "00000000000000000000", 00:19:59.342 "model_number": "SPDK bdev Controller", 00:19:59.342 "max_namespaces": 32, 00:19:59.342 "min_cntlid": 1, 00:19:59.342 "max_cntlid": 65519, 00:19:59.342 "ana_reporting": false 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "nvmf_subsystem_add_host", 00:19:59.342 "params": { 00:19:59.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.342 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.342 "psk": "key0" 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "nvmf_subsystem_add_ns", 00:19:59.342 "params": { 00:19:59.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.342 "namespace": { 00:19:59.342 "nsid": 1, 00:19:59.342 "bdev_name": "malloc0", 00:19:59.342 "nguid": "69F5767B287A4AFA8A4EA39B2953FC18", 00:19:59.342 "uuid": 
"69f5767b-287a-4afa-8a4e-a39b2953fc18", 00:19:59.342 "no_auto_visible": false 00:19:59.342 } 00:19:59.342 } 00:19:59.342 }, 00:19:59.342 { 00:19:59.342 "method": "nvmf_subsystem_add_listener", 00:19:59.342 "params": { 00:19:59.342 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.342 "listen_address": { 00:19:59.342 "trtype": "TCP", 00:19:59.342 "adrfam": "IPv4", 00:19:59.342 "traddr": "10.0.0.2", 00:19:59.342 "trsvcid": "4420" 00:19:59.342 }, 00:19:59.342 "secure_channel": false, 00:19:59.342 "sock_impl": "ssl" 00:19:59.342 } 00:19:59.342 } 00:19:59.342 ] 00:19:59.342 } 00:19:59.342 ] 00:19:59.342 }' 00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2230013 00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2230013 00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2230013 ']' 00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.342 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.342 [2024-11-20 16:31:45.185857] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:19:59.342 [2024-11-20 16:31:45.185912] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.342 [2024-11-20 16:31:45.262853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.604 [2024-11-20 16:31:45.297608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.604 [2024-11-20 16:31:45.297641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.604 [2024-11-20 16:31:45.297649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.604 [2024-11-20 16:31:45.297655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.604 [2024-11-20 16:31:45.297661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.604 [2024-11-20 16:31:45.298263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.604 [2024-11-20 16:31:45.498439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.604 [2024-11-20 16:31:45.530442] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:59.604 [2024-11-20 16:31:45.530656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.175 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.175 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:00.175 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:00.175 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:00.175 16:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2230056 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2230056 /var/tmp/bdevperf.sock 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2230056 ']' 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:00.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.175 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:00.175 "subsystems": [ 00:20:00.175 { 00:20:00.175 "subsystem": "keyring", 00:20:00.175 "config": [ 00:20:00.175 { 00:20:00.175 "method": "keyring_file_add_key", 00:20:00.175 "params": { 00:20:00.175 "name": "key0", 00:20:00.175 "path": "/tmp/tmp.egsM6D515H" 00:20:00.175 } 00:20:00.175 } 00:20:00.175 ] 00:20:00.175 }, 00:20:00.175 { 00:20:00.175 "subsystem": "iobuf", 00:20:00.175 "config": [ 00:20:00.175 { 00:20:00.175 "method": "iobuf_set_options", 00:20:00.175 "params": { 00:20:00.175 "small_pool_count": 8192, 00:20:00.175 "large_pool_count": 1024, 00:20:00.175 "small_bufsize": 8192, 00:20:00.175 "large_bufsize": 135168, 00:20:00.175 "enable_numa": false 00:20:00.175 } 00:20:00.175 } 00:20:00.175 ] 00:20:00.175 }, 00:20:00.175 { 00:20:00.175 "subsystem": "sock", 00:20:00.175 "config": [ 00:20:00.175 { 00:20:00.175 "method": "sock_set_default_impl", 00:20:00.175 "params": { 00:20:00.175 "impl_name": "posix" 00:20:00.175 } 00:20:00.175 }, 00:20:00.175 { 00:20:00.175 "method": "sock_impl_set_options", 00:20:00.175 "params": { 00:20:00.175 "impl_name": "ssl", 00:20:00.175 "recv_buf_size": 4096, 00:20:00.175 "send_buf_size": 4096, 00:20:00.175 "enable_recv_pipe": true, 00:20:00.175 "enable_quickack": false, 00:20:00.175 "enable_placement_id": 0, 00:20:00.175 "enable_zerocopy_send_server": true, 00:20:00.175 
"enable_zerocopy_send_client": false, 00:20:00.175 "zerocopy_threshold": 0, 00:20:00.175 "tls_version": 0, 00:20:00.175 "enable_ktls": false 00:20:00.175 } 00:20:00.175 }, 00:20:00.175 { 00:20:00.175 "method": "sock_impl_set_options", 00:20:00.175 "params": { 00:20:00.175 "impl_name": "posix", 00:20:00.175 "recv_buf_size": 2097152, 00:20:00.175 "send_buf_size": 2097152, 00:20:00.175 "enable_recv_pipe": true, 00:20:00.175 "enable_quickack": false, 00:20:00.175 "enable_placement_id": 0, 00:20:00.175 "enable_zerocopy_send_server": true, 00:20:00.175 "enable_zerocopy_send_client": false, 00:20:00.175 "zerocopy_threshold": 0, 00:20:00.175 "tls_version": 0, 00:20:00.175 "enable_ktls": false 00:20:00.175 } 00:20:00.175 } 00:20:00.175 ] 00:20:00.175 }, 00:20:00.175 { 00:20:00.175 "subsystem": "vmd", 00:20:00.175 "config": [] 00:20:00.175 }, 00:20:00.175 { 00:20:00.175 "subsystem": "accel", 00:20:00.175 "config": [ 00:20:00.175 { 00:20:00.175 "method": "accel_set_options", 00:20:00.176 "params": { 00:20:00.176 "small_cache_size": 128, 00:20:00.176 "large_cache_size": 16, 00:20:00.176 "task_count": 2048, 00:20:00.176 "sequence_count": 2048, 00:20:00.176 "buf_count": 2048 00:20:00.176 } 00:20:00.176 } 00:20:00.176 ] 00:20:00.176 }, 00:20:00.176 { 00:20:00.176 "subsystem": "bdev", 00:20:00.176 "config": [ 00:20:00.176 { 00:20:00.176 "method": "bdev_set_options", 00:20:00.176 "params": { 00:20:00.176 "bdev_io_pool_size": 65535, 00:20:00.176 "bdev_io_cache_size": 256, 00:20:00.176 "bdev_auto_examine": true, 00:20:00.176 "iobuf_small_cache_size": 128, 00:20:00.176 "iobuf_large_cache_size": 16 00:20:00.176 } 00:20:00.176 }, 00:20:00.176 { 00:20:00.176 "method": "bdev_raid_set_options", 00:20:00.176 "params": { 00:20:00.176 "process_window_size_kb": 1024, 00:20:00.176 "process_max_bandwidth_mb_sec": 0 00:20:00.176 } 00:20:00.176 }, 00:20:00.176 { 00:20:00.176 "method": "bdev_iscsi_set_options", 00:20:00.176 "params": { 00:20:00.176 "timeout_sec": 30 00:20:00.176 } 00:20:00.176 }, 
00:20:00.176 { 00:20:00.176 "method": "bdev_nvme_set_options", 00:20:00.176 "params": { 00:20:00.176 "action_on_timeout": "none", 00:20:00.176 "timeout_us": 0, 00:20:00.176 "timeout_admin_us": 0, 00:20:00.176 "keep_alive_timeout_ms": 10000, 00:20:00.176 "arbitration_burst": 0, 00:20:00.176 "low_priority_weight": 0, 00:20:00.176 "medium_priority_weight": 0, 00:20:00.176 "high_priority_weight": 0, 00:20:00.176 "nvme_adminq_poll_period_us": 10000, 00:20:00.176 "nvme_ioq_poll_period_us": 0, 00:20:00.176 "io_queue_requests": 512, 00:20:00.176 "delay_cmd_submit": true, 00:20:00.176 "transport_retry_count": 4, 00:20:00.176 "bdev_retry_count": 3, 00:20:00.176 "transport_ack_timeout": 0, 00:20:00.176 "ctrlr_loss_timeout_sec": 0, 00:20:00.176 "reconnect_delay_sec": 0, 00:20:00.176 "fast_io_fail_timeout_sec": 0, 00:20:00.176 "disable_auto_failback": false, 00:20:00.176 "generate_uuids": false, 00:20:00.176 "transport_tos": 0, 00:20:00.176 "nvme_error_stat": false, 00:20:00.176 "rdma_srq_size": 0, 00:20:00.176 "io_path_stat": false, 00:20:00.176 "allow_accel_sequence": false, 00:20:00.176 "rdma_max_cq_size": 0, 00:20:00.176 "rdma_cm_event_timeout_ms": 0, 00:20:00.176 "dhchap_digests": [ 00:20:00.176 "sha256", 00:20:00.176 "sha384", 00:20:00.176 "sha512" 00:20:00.176 ], 00:20:00.176 "dhchap_dhgroups": [ 00:20:00.176 "null", 00:20:00.176 "ffdhe2048", 00:20:00.176 "ffdhe3072", 00:20:00.176 "ffdhe4096", 00:20:00.176 "ffdhe6144", 00:20:00.176 "ffdhe8192" 00:20:00.176 ] 00:20:00.176 } 00:20:00.176 }, 00:20:00.176 { 00:20:00.176 "method": "bdev_nvme_attach_controller", 00:20:00.176 "params": { 00:20:00.176 "name": "nvme0", 00:20:00.176 "trtype": "TCP", 00:20:00.176 "adrfam": "IPv4", 00:20:00.176 "traddr": "10.0.0.2", 00:20:00.176 "trsvcid": "4420", 00:20:00.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.176 "prchk_reftag": false, 00:20:00.176 "prchk_guard": false, 00:20:00.176 "ctrlr_loss_timeout_sec": 0, 00:20:00.176 "reconnect_delay_sec": 0, 00:20:00.176 
"fast_io_fail_timeout_sec": 0, 00:20:00.176 "psk": "key0", 00:20:00.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.176 "hdgst": false, 00:20:00.176 "ddgst": false, 00:20:00.176 "multipath": "multipath" 00:20:00.176 } 00:20:00.176 }, 00:20:00.176 { 00:20:00.176 "method": "bdev_nvme_set_hotplug", 00:20:00.176 "params": { 00:20:00.176 "period_us": 100000, 00:20:00.176 "enable": false 00:20:00.176 } 00:20:00.176 }, 00:20:00.176 { 00:20:00.176 "method": "bdev_enable_histogram", 00:20:00.176 "params": { 00:20:00.176 "name": "nvme0n1", 00:20:00.176 "enable": true 00:20:00.176 } 00:20:00.176 }, 00:20:00.176 { 00:20:00.176 "method": "bdev_wait_for_examine" 00:20:00.176 } 00:20:00.176 ] 00:20:00.176 }, 00:20:00.176 { 00:20:00.176 "subsystem": "nbd", 00:20:00.176 "config": [] 00:20:00.176 } 00:20:00.176 ] 00:20:00.176 }' 00:20:00.176 [2024-11-20 16:31:46.063750] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:20:00.176 [2024-11-20 16:31:46.063803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230056 ] 00:20:00.437 [2024-11-20 16:31:46.148084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.437 [2024-11-20 16:31:46.177934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.437 [2024-11-20 16:31:46.314233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.008 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.008 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:01.008 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:20:01.008 16:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:01.268 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.268 16:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.268 Running I/O for 1 seconds... 00:20:02.210 4326.00 IOPS, 16.90 MiB/s 00:20:02.210 Latency(us) 00:20:02.210 [2024-11-20T15:31:48.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.210 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:02.210 Verification LBA range: start 0x0 length 0x2000 00:20:02.210 nvme0n1 : 1.01 4391.94 17.16 0.00 0.00 28960.91 5379.41 29272.75 00:20:02.210 [2024-11-20T15:31:48.169Z] =================================================================================================================== 00:20:02.210 [2024-11-20T15:31:48.169Z] Total : 4391.94 17.16 0.00 0.00 28960.91 5379.41 29272.75 00:20:02.210 { 00:20:02.210 "results": [ 00:20:02.210 { 00:20:02.210 "job": "nvme0n1", 00:20:02.210 "core_mask": "0x2", 00:20:02.210 "workload": "verify", 00:20:02.210 "status": "finished", 00:20:02.210 "verify_range": { 00:20:02.210 "start": 0, 00:20:02.210 "length": 8192 00:20:02.210 }, 00:20:02.210 "queue_depth": 128, 00:20:02.210 "io_size": 4096, 00:20:02.210 "runtime": 1.014131, 00:20:02.210 "iops": 4391.93753075293, 00:20:02.210 "mibps": 17.156005979503632, 00:20:02.210 "io_failed": 0, 00:20:02.210 "io_timeout": 0, 00:20:02.210 "avg_latency_us": 28960.911767699447, 00:20:02.210 "min_latency_us": 5379.413333333333, 00:20:02.210 "max_latency_us": 29272.746666666666 00:20:02.210 } 00:20:02.210 ], 00:20:02.210 "core_count": 1 00:20:02.210 } 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT 
SIGTERM EXIT 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:02.210 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:02.210 nvmf_trace.0 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2230056 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2230056 ']' 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2230056 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230056 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230056' 00:20:02.471 killing process with pid 2230056 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2230056 00:20:02.471 Received shutdown signal, test time was about 1.000000 seconds 00:20:02.471 00:20:02.471 Latency(us) 00:20:02.471 [2024-11-20T15:31:48.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.471 [2024-11-20T15:31:48.430Z] =================================================================================================================== 00:20:02.471 [2024-11-20T15:31:48.430Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2230056 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.471 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.471 rmmod nvme_tcp 00:20:02.732 rmmod nvme_fabrics 00:20:02.732 rmmod nvme_keyring 00:20:02.732 16:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2230013 ']' 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2230013 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2230013 ']' 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2230013 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230013 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230013' 00:20:02.732 killing process with pid 2230013 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2230013 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2230013 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.732 16:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.280 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:05.280 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RlR1dpsSFh /tmp/tmp.dB1S6I3Hz4 /tmp/tmp.egsM6D515H 00:20:05.280 00:20:05.280 real 1m21.451s 00:20:05.280 user 2m5.174s 00:20:05.280 sys 0m27.197s 00:20:05.280 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.280 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.280 ************************************ 00:20:05.280 END TEST nvmf_tls 00:20:05.280 ************************************ 00:20:05.280 16:31:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:05.280 16:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # 
'[' 3 -le 1 ']' 00:20:05.280 16:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.280 16:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:05.280 ************************************ 00:20:05.280 START TEST nvmf_fips 00:20:05.280 ************************************ 00:20:05.281 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:05.281 * Looking for test storage... 00:20:05.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:05.281 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:05.281 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:05.281 16:31:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:05.281 16:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.281 16:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:05.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.281 --rc genhtml_branch_coverage=1 00:20:05.281 --rc genhtml_function_coverage=1 00:20:05.281 --rc genhtml_legend=1 00:20:05.281 --rc geninfo_all_blocks=1 00:20:05.281 --rc geninfo_unexecuted_blocks=1 00:20:05.281 00:20:05.281 ' 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:05.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.281 --rc genhtml_branch_coverage=1 00:20:05.281 --rc genhtml_function_coverage=1 00:20:05.281 --rc genhtml_legend=1 00:20:05.281 --rc geninfo_all_blocks=1 00:20:05.281 --rc geninfo_unexecuted_blocks=1 00:20:05.281 00:20:05.281 ' 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:05.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.281 --rc genhtml_branch_coverage=1 00:20:05.281 --rc genhtml_function_coverage=1 00:20:05.281 --rc genhtml_legend=1 00:20:05.281 --rc geninfo_all_blocks=1 00:20:05.281 --rc geninfo_unexecuted_blocks=1 00:20:05.281 00:20:05.281 ' 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:05.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:05.281 --rc genhtml_branch_coverage=1 00:20:05.281 --rc genhtml_function_coverage=1 00:20:05.281 --rc genhtml_legend=1 00:20:05.281 --rc geninfo_all_blocks=1 00:20:05.281 --rc geninfo_unexecuted_blocks=1 00:20:05.281 00:20:05.281 ' 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.281 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:05.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:05.282 Error setting digest 00:20:05.282 406214B9CB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:05.282 406214B9CB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:05.282 16:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.282 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.543 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:05.543 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:05.543 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:05.543 16:31:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
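The `gather_supported_nvmf_pci_devs` trace above buckets PCI functions into per-NIC-family arrays (`e810`, `x722`, `mlx`) by indexing a cache keyed on `vendor:device`. A minimal sketch of that bucketing, with `pci_bus_cache` hand-filled to mimic this run's two E810 ports (the harness derives it from lspci):

```shell
#!/usr/bin/env bash
# Stand-in for the harness's lspci-derived cache, keyed "vendor:device".
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"   # Intel E810, as found in this run
    ["0x15b3:0x1017"]=""                            # no ConnectX-5 present
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=()
# Unquoted expansion is intentional: a multi-device value splits on spaces,
# an empty value contributes no array elements.
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
```

The same word-splitting behavior is why the harness later copies `pci_devs=("${e810[@]}")` with quotes once the elements are individual addresses.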
00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:13.690 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:13.690 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:13.691 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:13.691 Found net devices under 0000:31:00.0: cvl_0_0 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:13.691 Found net devices under 0000:31:00.1: cvl_0_1 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.691 16:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:13.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:20:13.691 00:20:13.691 --- 10.0.0.2 ping statistics --- 00:20:13.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.691 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:20:13.691 00:20:13.691 --- 10.0.0.1 ping statistics --- 00:20:13.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.691 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:13.691 16:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.691 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2234939 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2234939 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2234939 ']' 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.692 16:31:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.692 [2024-11-20 16:31:58.728959] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
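The `waitforlisten 2234939` call above polls, bounded by `max_retries=100`, until the freshly launched `nvmf_tgt` is listening on its UNIX-domain RPC socket. A simplified sketch of that bounded-retry pattern; the real helper also checks the PID is alive and probes the RPC socket, while this version only waits for the path to appear:

```shell
#!/usr/bin/env bash
# Poll for a path (e.g. /var/tmp/spdk.sock) with a retry budget, as
# waitforlisten does while the target starts up.
wait_for_listen_file() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0   # target is up; stop polling
        sleep 0.1
    done
    return 1   # budget exhausted: the app never started listening
}
```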
00:20:13.692 [2024-11-20 16:31:58.729038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.692 [2024-11-20 16:31:58.828784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.692 [2024-11-20 16:31:58.879559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.692 [2024-11-20 16:31:58.879609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.692 [2024-11-20 16:31:58.879618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.692 [2024-11-20 16:31:58.879626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.692 [2024-11-20 16:31:58.879632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:13.692 [2024-11-20 16:31:58.880494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.NLp 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.NLp 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.NLp 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.NLp 00:20:13.692 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:14.065 [2024-11-20 16:31:59.744025] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.065 [2024-11-20 16:31:59.760021] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:14.066 [2024-11-20 16:31:59.760323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.066 malloc0 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2235133 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2235133 /var/tmp/bdevperf.sock 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2235133 ']' 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.066 16:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:14.066 [2024-11-20 16:31:59.914186] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:20:14.066 [2024-11-20 16:31:59.914253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235133 ] 00:20:14.066 [2024-11-20 16:31:59.977862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.350 [2024-11-20 16:32:00.016440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.922 16:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.922 16:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:14.922 16:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.NLp 00:20:14.922 16:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:15.182 [2024-11-20 16:32:00.992069] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:15.182 TLSTESTn1 00:20:15.182 16:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.442 Running I/O for 10 seconds... 
00:20:17.326 5068.00 IOPS, 19.80 MiB/s [2024-11-20T15:32:04.227Z] 5173.50 IOPS, 20.21 MiB/s [2024-11-20T15:32:05.614Z] 5188.00 IOPS, 20.27 MiB/s [2024-11-20T15:32:06.555Z] 5402.75 IOPS, 21.10 MiB/s [2024-11-20T15:32:07.495Z] 5549.00 IOPS, 21.68 MiB/s [2024-11-20T15:32:08.433Z] 5383.33 IOPS, 21.03 MiB/s [2024-11-20T15:32:09.372Z] 5458.29 IOPS, 21.32 MiB/s [2024-11-20T15:32:10.312Z] 5525.50 IOPS, 21.58 MiB/s [2024-11-20T15:32:11.250Z] 5488.89 IOPS, 21.44 MiB/s [2024-11-20T15:32:11.250Z] 5452.10 IOPS, 21.30 MiB/s 00:20:25.291 Latency(us) 00:20:25.291 [2024-11-20T15:32:11.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.291 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:25.291 Verification LBA range: start 0x0 length 0x2000 00:20:25.291 TLSTESTn1 : 10.01 5457.26 21.32 0.00 0.00 23422.55 4751.36 29928.11 00:20:25.291 [2024-11-20T15:32:11.250Z] =================================================================================================================== 00:20:25.291 [2024-11-20T15:32:11.250Z] Total : 5457.26 21.32 0.00 0.00 23422.55 4751.36 29928.11 00:20:25.291 { 00:20:25.291 "results": [ 00:20:25.291 { 00:20:25.291 "job": "TLSTESTn1", 00:20:25.291 "core_mask": "0x4", 00:20:25.291 "workload": "verify", 00:20:25.291 "status": "finished", 00:20:25.291 "verify_range": { 00:20:25.291 "start": 0, 00:20:25.291 "length": 8192 00:20:25.291 }, 00:20:25.291 "queue_depth": 128, 00:20:25.291 "io_size": 4096, 00:20:25.291 "runtime": 10.01381, 00:20:25.291 "iops": 5457.26351908015, 00:20:25.291 "mibps": 21.317435621406837, 00:20:25.291 "io_failed": 0, 00:20:25.291 "io_timeout": 0, 00:20:25.291 "avg_latency_us": 23422.55216903333, 00:20:25.291 "min_latency_us": 4751.36, 00:20:25.291 "max_latency_us": 29928.106666666667 00:20:25.291 } 00:20:25.291 ], 00:20:25.291 "core_count": 1 00:20:25.291 } 00:20:25.291 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:25.291 16:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:25.291 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:25.291 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:25.291 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:25.291 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:25.552 nvmf_trace.0 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2235133 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2235133 ']' 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2235133 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2235133 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2235133' 00:20:25.552 killing process with pid 2235133 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2235133 00:20:25.552 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.552 00:20:25.552 Latency(us) 00:20:25.552 [2024-11-20T15:32:11.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.552 [2024-11-20T15:32:11.511Z] =================================================================================================================== 00:20:25.552 [2024-11-20T15:32:11.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2235133 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.552 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:25.812 rmmod nvme_tcp 00:20:25.812 rmmod nvme_fabrics 00:20:25.812 rmmod nvme_keyring 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.812 16:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2234939 ']' 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2234939 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2234939 ']' 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2234939 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234939 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234939' 00:20:25.812 killing process with pid 2234939 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2234939 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2234939 00:20:25.812 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.813 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.NLp 00:20:28.357 00:20:28.357 real 0m22.988s 00:20:28.357 user 0m24.793s 00:20:28.357 sys 0m9.341s 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.357 ************************************ 00:20:28.357 END TEST nvmf_fips 00:20:28.357 ************************************ 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.357 ************************************ 00:20:28.357 START TEST nvmf_control_msg_list 00:20:28.357 ************************************ 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:28.357 * Looking for test storage... 00:20:28.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:28.357 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:28.357 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:28.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.358 --rc genhtml_branch_coverage=1 00:20:28.358 --rc genhtml_function_coverage=1 00:20:28.358 --rc genhtml_legend=1 00:20:28.358 --rc geninfo_all_blocks=1 00:20:28.358 --rc geninfo_unexecuted_blocks=1 00:20:28.358 00:20:28.358 ' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:28.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.358 --rc genhtml_branch_coverage=1 00:20:28.358 --rc genhtml_function_coverage=1 00:20:28.358 --rc genhtml_legend=1 00:20:28.358 --rc geninfo_all_blocks=1 00:20:28.358 --rc geninfo_unexecuted_blocks=1 00:20:28.358 00:20:28.358 ' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:28.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.358 --rc genhtml_branch_coverage=1 00:20:28.358 --rc genhtml_function_coverage=1 00:20:28.358 --rc genhtml_legend=1 00:20:28.358 --rc geninfo_all_blocks=1 00:20:28.358 --rc geninfo_unexecuted_blocks=1 00:20:28.358 00:20:28.358 ' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:28.358 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.358 --rc genhtml_branch_coverage=1 00:20:28.358 --rc genhtml_function_coverage=1 00:20:28.358 --rc genhtml_legend=1 00:20:28.358 --rc geninfo_all_blocks=1 00:20:28.358 --rc geninfo_unexecuted_blocks=1 00:20:28.358 00:20:28.358 ' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:28.358 16:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.358 16:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:28.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:28.358 16:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.358 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.504 16:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.504 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:36.505 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:36.505 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.505 16:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:36.505 Found net devices under 0000:31:00.0: cvl_0_0 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.505 16:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:36.505 Found net devices under 0000:31:00.1: cvl_0_1 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.505 16:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:36.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:20:36.505 00:20:36.505 --- 10.0.0.2 ping statistics --- 00:20:36.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.505 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:20:36.505 00:20:36.505 --- 10.0.0.1 ping statistics --- 00:20:36.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.505 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.505 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2241587 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2241587 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2241587 ']' 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.506 [2024-11-20 16:32:21.415225] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:20:36.506 [2024-11-20 16:32:21.415272] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.506 [2024-11-20 16:32:21.486739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.506 [2024-11-20 16:32:21.521897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.506 [2024-11-20 16:32:21.521928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.506 [2024-11-20 16:32:21.521937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.506 [2024-11-20 16:32:21.521944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.506 [2024-11-20 16:32:21.521950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:36.506 [2024-11-20 16:32:21.522515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.506 [2024-11-20 16:32:21.658740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.506 Malloc0 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:36.506 [2024-11-20 16:32:21.709690] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2241767 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2241769 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2241771 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2241767 00:20:36.506 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.506 [2024-11-20 16:32:21.780136] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:36.506 [2024-11-20 16:32:21.800037] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:36.506 [2024-11-20 16:32:21.810065] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:37.078 Initializing NVMe Controllers 00:20:37.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:37.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:37.078 Initialization complete. Launching workers. 00:20:37.078 ======================================================== 00:20:37.078 Latency(us) 00:20:37.078 Device Information : IOPS MiB/s Average min max 00:20:37.078 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1652.00 6.45 605.28 297.41 776.95 00:20:37.078 ======================================================== 00:20:37.078 Total : 1652.00 6.45 605.28 297.41 776.95 00:20:37.078 00:20:37.078 Initializing NVMe Controllers 00:20:37.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:37.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:37.078 Initialization complete. Launching workers. 
00:20:37.078 ======================================================== 00:20:37.078 Latency(us) 00:20:37.078 Device Information : IOPS MiB/s Average min max 00:20:37.078 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2169.00 8.47 460.87 128.07 686.80 00:20:37.078 ======================================================== 00:20:37.078 Total : 2169.00 8.47 460.87 128.07 686.80 00:20:37.078 00:20:37.078 [2024-11-20 16:32:22.903838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xded300 is same with the state(6) to be set 00:20:37.078 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2241769 00:20:37.078 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2241771 00:20:37.078 Initializing NVMe Controllers 00:20:37.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:37.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:37.078 Initialization complete. Launching workers. 
00:20:37.078 ======================================================== 00:20:37.078 Latency(us) 00:20:37.078 Device Information : IOPS MiB/s Average min max 00:20:37.078 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40912.69 40810.71 41161.13 00:20:37.078 ======================================================== 00:20:37.078 Total : 25.00 0.10 40912.69 40810.71 41161.13 00:20:37.078 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.339 rmmod nvme_tcp 00:20:37.339 rmmod nvme_fabrics 00:20:37.339 rmmod nvme_keyring 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2241587 ']' 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # 
killprocess 2241587 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2241587 ']' 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2241587 00:20:37.339 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:37.340 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.340 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2241587 00:20:37.340 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.340 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.340 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2241587' 00:20:37.340 killing process with pid 2241587 00:20:37.340 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2241587 00:20:37.340 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2241587 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.601 16:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.514 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.514 00:20:39.514 real 0m11.485s 00:20:39.514 user 0m7.207s 00:20:39.514 sys 0m6.206s 00:20:39.514 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.514 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:39.514 ************************************ 00:20:39.514 END TEST nvmf_control_msg_list 00:20:39.514 ************************************ 00:20:39.514 16:32:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:39.514 16:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.514 16:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.514 16:32:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:39.514 ************************************ 
00:20:39.514 START TEST nvmf_wait_for_buf 00:20:39.514 ************************************ 00:20:39.514 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:39.776 * Looking for test storage... 00:20:39.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@341 -- # ver2_l=1 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.776 16:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.776 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:39.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.776 --rc genhtml_branch_coverage=1 00:20:39.777 --rc genhtml_function_coverage=1 00:20:39.777 --rc genhtml_legend=1 00:20:39.777 --rc geninfo_all_blocks=1 00:20:39.777 --rc geninfo_unexecuted_blocks=1 00:20:39.777 00:20:39.777 ' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.777 --rc genhtml_branch_coverage=1 00:20:39.777 --rc genhtml_function_coverage=1 00:20:39.777 --rc genhtml_legend=1 00:20:39.777 --rc geninfo_all_blocks=1 00:20:39.777 --rc geninfo_unexecuted_blocks=1 00:20:39.777 00:20:39.777 ' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.777 --rc genhtml_branch_coverage=1 00:20:39.777 --rc genhtml_function_coverage=1 00:20:39.777 --rc genhtml_legend=1 00:20:39.777 --rc geninfo_all_blocks=1 00:20:39.777 --rc geninfo_unexecuted_blocks=1 00:20:39.777 00:20:39.777 ' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.777 --rc genhtml_branch_coverage=1 00:20:39.777 --rc genhtml_function_coverage=1 00:20:39.777 --rc genhtml_legend=1 00:20:39.777 --rc geninfo_all_blocks=1 00:20:39.777 --rc geninfo_unexecuted_blocks=1 00:20:39.777 00:20:39.777 ' 00:20:39.777 
16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:39.777 16:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.777 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.922 
16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:47.922 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.922 16:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:47.922 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.922 16:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:47.922 Found net devices under 0000:31:00.0: cvl_0_0 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:47.922 Found net devices under 0000:31:00.1: cvl_0_1 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.922 
16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.922 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.923 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.923 16:32:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:47.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:20:47.923 00:20:47.923 --- 10.0.0.2 ping statistics --- 00:20:47.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.923 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:20:47.923 00:20:47.923 --- 10.0.0.1 ping statistics --- 00:20:47.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.923 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2246244 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2246244 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2246244 ']' 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.923 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:47.923 [2024-11-20 16:32:33.153433] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:20:47.923 [2024-11-20 16:32:33.153491] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.923 [2024-11-20 16:32:33.237387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.923 [2024-11-20 16:32:33.275749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.923 [2024-11-20 16:32:33.275786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.923 [2024-11-20 16:32:33.275795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.923 [2024-11-20 16:32:33.275801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.923 [2024-11-20 16:32:33.275807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.923 [2024-11-20 16:32:33.276446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.184 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.184 Malloc0 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.184 [2024-11-20 16:32:34.084406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.184 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:48.185 [2024-11-20 16:32:34.108576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.185 16:32:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.445 [2024-11-20 16:32:34.224088] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:49.831 Initializing NVMe Controllers 00:20:49.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:49.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:49.831 Initialization complete. Launching workers. 00:20:49.831 ======================================================== 00:20:49.831 Latency(us) 00:20:49.831 Device Information : IOPS MiB/s Average min max 00:20:49.831 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.43 8034.16 63851.78 00:20:49.831 ======================================================== 00:20:49.831 Total : 129.00 16.12 32295.43 8034.16 63851.78 00:20:49.831 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.831 rmmod nvme_tcp 00:20:49.831 rmmod nvme_fabrics 00:20:49.831 rmmod nvme_keyring 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2246244 ']' 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2246244 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2246244 ']' 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2246244 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.831 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2246244 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2246244' 00:20:50.093 killing process with pid 2246244 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2246244 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2246244 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.093 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.641 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:52.641 00:20:52.641 real 0m12.521s 00:20:52.641 user 0m5.104s 00:20:52.641 sys 0m5.956s 00:20:52.641 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.641 16:32:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:52.641 ************************************ 00:20:52.641 END TEST nvmf_wait_for_buf 00:20:52.641 ************************************ 00:20:52.641 16:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:52.641 16:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:52.641 16:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:52.641 16:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:52.641 16:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.641 16:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 
-- # net_devs=() 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:59.233 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.233 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:59.234 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:59.234 Found net devices under 0000:31:00.0: cvl_0_0 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:59.234 Found net devices under 0000:31:00.1: cvl_0_1 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:59.234 ************************************ 00:20:59.234 START TEST nvmf_perf_adq 00:20:59.234 ************************************ 00:20:59.234 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:59.495 * Looking for test storage... 
00:20:59.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.495 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:59.496 16:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:59.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.496 --rc 
genhtml_branch_coverage=1 00:20:59.496 --rc genhtml_function_coverage=1 00:20:59.496 --rc genhtml_legend=1 00:20:59.496 --rc geninfo_all_blocks=1 00:20:59.496 --rc geninfo_unexecuted_blocks=1 00:20:59.496 00:20:59.496 ' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:59.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.496 --rc genhtml_branch_coverage=1 00:20:59.496 --rc genhtml_function_coverage=1 00:20:59.496 --rc genhtml_legend=1 00:20:59.496 --rc geninfo_all_blocks=1 00:20:59.496 --rc geninfo_unexecuted_blocks=1 00:20:59.496 00:20:59.496 ' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:59.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.496 --rc genhtml_branch_coverage=1 00:20:59.496 --rc genhtml_function_coverage=1 00:20:59.496 --rc genhtml_legend=1 00:20:59.496 --rc geninfo_all_blocks=1 00:20:59.496 --rc geninfo_unexecuted_blocks=1 00:20:59.496 00:20:59.496 ' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:59.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.496 --rc genhtml_branch_coverage=1 00:20:59.496 --rc genhtml_function_coverage=1 00:20:59.496 --rc genhtml_legend=1 00:20:59.496 --rc geninfo_all_blocks=1 00:20:59.496 --rc geninfo_unexecuted_blocks=1 00:20:59.496 00:20:59.496 ' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:59.496 16:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:59.496 16:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:59.496 16:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:59.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:59.496 16:32:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:07.641 16:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:07.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:07.641 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:07.641 Found net devices under 0000:31:00.0: cvl_0_0 00:21:07.641 16:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:07.641 Found net devices under 0000:31:00.1: cvl_0_1 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:07.641 16:32:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:08.214 16:32:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:10.126 16:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:15.415 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:15.415 16:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:15.415 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:15.415 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:15.416 Found net devices under 0000:31:00.0: cvl_0_0 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:15.416 Found net devices under 0000:31:00.1: cvl_0_1 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.416 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:15.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:15.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.717 ms 00:21:15.416 00:21:15.416 --- 10.0.0.2 ping statistics --- 00:21:15.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.416 rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:21:15.416 00:21:15.416 --- 10.0.0.1 ping statistics --- 00:21:15.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.416 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2256567 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2256567 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2256567 ']' 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.416 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.416 [2024-11-20 16:33:01.318258] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:21:15.416 [2024-11-20 16:33:01.318324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.677 [2024-11-20 16:33:01.405156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.678 [2024-11-20 16:33:01.447879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.678 [2024-11-20 16:33:01.447914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.678 [2024-11-20 16:33:01.447922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.678 [2024-11-20 16:33:01.447929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.678 [2024-11-20 16:33:01.447935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:15.678 [2024-11-20 16:33:01.449540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.678 [2024-11-20 16:33:01.449656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.678 [2024-11-20 16:33:01.449810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.678 [2024-11-20 16:33:01.449811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:16.249 16:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.249 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.511 [2024-11-20 16:33:02.284656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.511 Malloc1 00:21:16.511 16:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.511 [2024-11-20 16:33:02.352087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2256686 00:21:16.511 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:16.511 16:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:18.424 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:18.424 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.424 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.686 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.686 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:18.686 "tick_rate": 2400000000, 00:21:18.686 "poll_groups": [ 00:21:18.686 { 00:21:18.686 "name": "nvmf_tgt_poll_group_000", 00:21:18.686 "admin_qpairs": 1, 00:21:18.686 "io_qpairs": 1, 00:21:18.686 "current_admin_qpairs": 1, 00:21:18.686 "current_io_qpairs": 1, 00:21:18.686 "pending_bdev_io": 0, 00:21:18.686 "completed_nvme_io": 19144, 00:21:18.686 "transports": [ 00:21:18.686 { 00:21:18.686 "trtype": "TCP" 00:21:18.686 } 00:21:18.686 ] 00:21:18.686 }, 00:21:18.686 { 00:21:18.686 "name": "nvmf_tgt_poll_group_001", 00:21:18.686 "admin_qpairs": 0, 00:21:18.686 "io_qpairs": 1, 00:21:18.686 "current_admin_qpairs": 0, 00:21:18.686 "current_io_qpairs": 1, 00:21:18.686 "pending_bdev_io": 0, 00:21:18.686 "completed_nvme_io": 27525, 00:21:18.686 "transports": [ 00:21:18.686 { 00:21:18.686 "trtype": "TCP" 00:21:18.686 } 00:21:18.686 ] 00:21:18.686 }, 00:21:18.686 { 00:21:18.686 "name": "nvmf_tgt_poll_group_002", 00:21:18.686 "admin_qpairs": 0, 00:21:18.686 "io_qpairs": 1, 00:21:18.686 "current_admin_qpairs": 0, 00:21:18.686 "current_io_qpairs": 1, 00:21:18.686 "pending_bdev_io": 0, 00:21:18.686 "completed_nvme_io": 20684, 00:21:18.686 
"transports": [ 00:21:18.686 { 00:21:18.686 "trtype": "TCP" 00:21:18.686 } 00:21:18.686 ] 00:21:18.686 }, 00:21:18.686 { 00:21:18.686 "name": "nvmf_tgt_poll_group_003", 00:21:18.686 "admin_qpairs": 0, 00:21:18.686 "io_qpairs": 1, 00:21:18.686 "current_admin_qpairs": 0, 00:21:18.686 "current_io_qpairs": 1, 00:21:18.686 "pending_bdev_io": 0, 00:21:18.686 "completed_nvme_io": 20209, 00:21:18.686 "transports": [ 00:21:18.686 { 00:21:18.686 "trtype": "TCP" 00:21:18.686 } 00:21:18.686 ] 00:21:18.686 } 00:21:18.686 ] 00:21:18.686 }' 00:21:18.686 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:18.686 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:18.686 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:18.686 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:18.686 16:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2256686 00:21:26.825 Initializing NVMe Controllers 00:21:26.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:26.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:26.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:26.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:26.826 Initialization complete. Launching workers. 
00:21:26.826 ======================================================== 00:21:26.826 Latency(us) 00:21:26.826 Device Information : IOPS MiB/s Average min max 00:21:26.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11320.60 44.22 5654.88 1729.33 9661.80 00:21:26.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14656.20 57.25 4366.12 1315.17 9465.49 00:21:26.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13852.90 54.11 4620.05 1348.95 11366.24 00:21:26.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13543.90 52.91 4724.96 1340.15 11385.58 00:21:26.826 ======================================================== 00:21:26.826 Total : 53373.59 208.49 4796.43 1315.17 11385.58 00:21:26.826 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:26.826 rmmod nvme_tcp 00:21:26.826 rmmod nvme_fabrics 00:21:26.826 rmmod nvme_keyring 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:26.826 16:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2256567 ']' 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2256567 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2256567 ']' 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2256567 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256567 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2256567' 00:21:26.826 killing process with pid 2256567 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2256567 00:21:26.826 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2256567 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:27.086 
16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.086 16:33:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.001 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:29.001 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:29.001 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:29.001 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:30.914 16:33:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:32.828 16:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:38.125 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:38.126 16:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:38.126 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:38.126 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:38.126 Found net devices under 0000:31:00.0: cvl_0_0 00:21:38.126 16:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:38.126 Found net devices under 0000:31:00.1: cvl_0_1 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:38.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:21:38.126 00:21:38.126 --- 10.0.0.2 ping statistics --- 00:21:38.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.126 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:21:38.126 00:21:38.126 --- 10.0.0.1 ping statistics --- 00:21:38.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.126 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:38.126 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:38.127 net.core.busy_poll = 1 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:38.127 net.core.busy_read = 1 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:38.127 16:33:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:38.127 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:38.127 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:38.127 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2261902 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2261902 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2261902 ']' 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.389 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.389 [2024-11-20 16:33:24.190072] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:21:38.389 [2024-11-20 16:33:24.190141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.389 [2024-11-20 16:33:24.274042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.389 [2024-11-20 16:33:24.315811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.389 [2024-11-20 16:33:24.315846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.389 [2024-11-20 16:33:24.315854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.389 [2024-11-20 16:33:24.315861] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:38.389 [2024-11-20 16:33:24.315867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.389 [2024-11-20 16:33:24.317729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.389 [2024-11-20 16:33:24.317846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.389 [2024-11-20 16:33:24.318019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.389 [2024-11-20 16:33:24.318019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.332 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.332 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:39.332 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.332 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.332 16:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.332 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.332 [2024-11-20 16:33:25.173106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.333 16:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.333 Malloc1 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.333 [2024-11-20 16:33:25.243419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2261977 
00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:39.333 16:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:41.879 "tick_rate": 2400000000, 00:21:41.879 "poll_groups": [ 00:21:41.879 { 00:21:41.879 "name": "nvmf_tgt_poll_group_000", 00:21:41.879 "admin_qpairs": 1, 00:21:41.879 "io_qpairs": 2, 00:21:41.879 "current_admin_qpairs": 1, 00:21:41.879 "current_io_qpairs": 2, 00:21:41.879 "pending_bdev_io": 0, 00:21:41.879 "completed_nvme_io": 28397, 00:21:41.879 "transports": [ 00:21:41.879 { 00:21:41.879 "trtype": "TCP" 00:21:41.879 } 00:21:41.879 ] 00:21:41.879 }, 00:21:41.879 { 00:21:41.879 "name": "nvmf_tgt_poll_group_001", 00:21:41.879 "admin_qpairs": 0, 00:21:41.879 "io_qpairs": 2, 00:21:41.879 "current_admin_qpairs": 0, 00:21:41.879 "current_io_qpairs": 2, 00:21:41.879 "pending_bdev_io": 0, 00:21:41.879 "completed_nvme_io": 36077, 00:21:41.879 "transports": [ 00:21:41.879 { 00:21:41.879 "trtype": "TCP" 00:21:41.879 } 00:21:41.879 ] 00:21:41.879 }, 00:21:41.879 { 00:21:41.879 "name": "nvmf_tgt_poll_group_002", 00:21:41.879 "admin_qpairs": 0, 00:21:41.879 "io_qpairs": 0, 00:21:41.879 "current_admin_qpairs": 0, 
00:21:41.879 "current_io_qpairs": 0, 00:21:41.879 "pending_bdev_io": 0, 00:21:41.879 "completed_nvme_io": 0, 00:21:41.879 "transports": [ 00:21:41.879 { 00:21:41.879 "trtype": "TCP" 00:21:41.879 } 00:21:41.879 ] 00:21:41.879 }, 00:21:41.879 { 00:21:41.879 "name": "nvmf_tgt_poll_group_003", 00:21:41.879 "admin_qpairs": 0, 00:21:41.879 "io_qpairs": 0, 00:21:41.879 "current_admin_qpairs": 0, 00:21:41.879 "current_io_qpairs": 0, 00:21:41.879 "pending_bdev_io": 0, 00:21:41.879 "completed_nvme_io": 0, 00:21:41.879 "transports": [ 00:21:41.879 { 00:21:41.879 "trtype": "TCP" 00:21:41.879 } 00:21:41.879 ] 00:21:41.879 } 00:21:41.879 ] 00:21:41.879 }' 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:41.879 16:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2261977 00:21:50.013 Initializing NVMe Controllers 00:21:50.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:50.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:50.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:50.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:50.013 Initialization complete. Launching workers. 
00:21:50.013 ======================================================== 00:21:50.013 Latency(us) 00:21:50.013 Device Information : IOPS MiB/s Average min max 00:21:50.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7729.90 30.19 8281.61 1069.94 52653.36 00:21:50.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10406.80 40.65 6150.28 1105.63 49382.17 00:21:50.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11683.40 45.64 5478.34 1036.34 53323.62 00:21:50.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9573.80 37.40 6684.65 962.11 49678.09 00:21:50.013 ======================================================== 00:21:50.013 Total : 39393.90 153.88 6499.08 962.11 53323.62 00:21:50.013 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.013 rmmod nvme_tcp 00:21:50.013 rmmod nvme_fabrics 00:21:50.013 rmmod nvme_keyring 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:50.013 16:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2261902 ']' 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2261902 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2261902 ']' 00:21:50.013 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2261902 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261902 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261902' 00:21:50.014 killing process with pid 2261902 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2261902 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2261902 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:50.014 
16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.014 16:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.930 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:51.930 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:51.930 00:21:51.930 real 0m52.653s 00:21:51.930 user 2m49.129s 00:21:51.930 sys 0m11.675s 00:21:51.930 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.930 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.930 ************************************ 00:21:51.930 END TEST nvmf_perf_adq 00:21:51.931 ************************************ 00:21:51.931 16:33:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:51.931 16:33:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.931 16:33:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.931 16:33:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.931 ************************************ 00:21:51.931 START TEST nvmf_shutdown 00:21:51.931 ************************************ 00:21:51.931 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:52.193 * Looking for test storage... 00:21:52.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:52.193 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:52.193 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:52.193 16:33:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.193 16:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.193 --rc genhtml_branch_coverage=1 00:21:52.193 --rc genhtml_function_coverage=1 00:21:52.193 --rc genhtml_legend=1 00:21:52.193 --rc geninfo_all_blocks=1 00:21:52.193 --rc geninfo_unexecuted_blocks=1 00:21:52.193 00:21:52.193 ' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.193 --rc genhtml_branch_coverage=1 00:21:52.193 --rc genhtml_function_coverage=1 00:21:52.193 --rc genhtml_legend=1 00:21:52.193 --rc geninfo_all_blocks=1 00:21:52.193 --rc geninfo_unexecuted_blocks=1 00:21:52.193 00:21:52.193 ' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.193 --rc genhtml_branch_coverage=1 00:21:52.193 --rc genhtml_function_coverage=1 00:21:52.193 --rc genhtml_legend=1 00:21:52.193 --rc geninfo_all_blocks=1 00:21:52.193 --rc geninfo_unexecuted_blocks=1 00:21:52.193 00:21:52.193 ' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:52.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.193 --rc genhtml_branch_coverage=1 00:21:52.193 --rc genhtml_function_coverage=1 00:21:52.193 --rc genhtml_legend=1 00:21:52.193 --rc geninfo_all_blocks=1 00:21:52.193 --rc geninfo_unexecuted_blocks=1 00:21:52.193 00:21:52.193 ' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.193 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:52.194 ************************************ 00:21:52.194 START TEST nvmf_shutdown_tc1 00:21:52.194 ************************************ 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.194 16:33:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:00.336 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:00.337 16:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.337 16:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:00.337 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.337 16:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:00.337 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:00.337 Found net devices under 0000:31:00.0: cvl_0_0 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:00.337 Found net devices under 0000:31:00.1: cvl_0_1 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:00.337 16:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.337 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:00.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:22:00.338 00:22:00.338 --- 10.0.0.2 ping statistics --- 00:22:00.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.338 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:22:00.338 00:22:00.338 --- 10.0.0.1 ping statistics --- 00:22:00.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.338 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2268442 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2268442 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2268442 ']' 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:00.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.338 16:33:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.338 [2024-11-20 16:33:45.519632] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:00.338 [2024-11-20 16:33:45.519696] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.338 [2024-11-20 16:33:45.621359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:00.338 [2024-11-20 16:33:45.673644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.338 [2024-11-20 16:33:45.673696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.338 [2024-11-20 16:33:45.673704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.338 [2024-11-20 16:33:45.673711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.338 [2024-11-20 16:33:45.673718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.338 [2024-11-20 16:33:45.675776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.338 [2024-11-20 16:33:45.675942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:00.338 [2024-11-20 16:33:45.676079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.338 [2024-11-20 16:33:45.676080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.600 [2024-11-20 16:33:46.381486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.600 16:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.600 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.600 Malloc1 00:22:00.600 [2024-11-20 16:33:46.496325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.600 Malloc2 00:22:00.600 Malloc3 00:22:00.868 Malloc4 00:22:00.868 Malloc5 00:22:00.868 Malloc6 00:22:00.868 Malloc7 00:22:00.868 Malloc8 00:22:00.868 Malloc9 
00:22:01.167 Malloc10 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2268703 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2268703 /var/tmp/bdevperf.sock 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2268703 ']' 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.167 { 00:22:01.167 "params": { 00:22:01.167 "name": "Nvme$subsystem", 00:22:01.167 "trtype": "$TEST_TRANSPORT", 00:22:01.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.167 "adrfam": "ipv4", 00:22:01.167 "trsvcid": "$NVMF_PORT", 00:22:01.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.167 "hdgst": ${hdgst:-false}, 00:22:01.167 "ddgst": ${ddgst:-false} 00:22:01.167 }, 00:22:01.167 "method": "bdev_nvme_attach_controller" 00:22:01.167 } 00:22:01.167 EOF 00:22:01.167 )") 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.167 { 00:22:01.167 "params": { 00:22:01.167 "name": "Nvme$subsystem", 00:22:01.167 "trtype": "$TEST_TRANSPORT", 00:22:01.167 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.167 "adrfam": "ipv4", 00:22:01.167 "trsvcid": "$NVMF_PORT", 00:22:01.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.167 "hdgst": ${hdgst:-false}, 00:22:01.167 "ddgst": ${ddgst:-false} 00:22:01.167 }, 00:22:01.167 "method": "bdev_nvme_attach_controller" 00:22:01.167 } 00:22:01.167 EOF 00:22:01.167 )") 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.167 { 00:22:01.167 "params": { 00:22:01.167 "name": "Nvme$subsystem", 00:22:01.167 "trtype": "$TEST_TRANSPORT", 00:22:01.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.167 "adrfam": "ipv4", 00:22:01.167 "trsvcid": "$NVMF_PORT", 00:22:01.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.167 "hdgst": ${hdgst:-false}, 00:22:01.167 "ddgst": ${ddgst:-false} 00:22:01.167 }, 00:22:01.167 "method": "bdev_nvme_attach_controller" 00:22:01.167 } 00:22:01.167 EOF 00:22:01.167 )") 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.167 { 00:22:01.167 "params": { 00:22:01.167 "name": "Nvme$subsystem", 00:22:01.167 "trtype": "$TEST_TRANSPORT", 00:22:01.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.167 "adrfam": "ipv4", 00:22:01.167 "trsvcid": "$NVMF_PORT", 00:22:01.167 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.167 "hdgst": ${hdgst:-false}, 00:22:01.167 "ddgst": ${ddgst:-false} 00:22:01.167 }, 00:22:01.167 "method": "bdev_nvme_attach_controller" 00:22:01.167 } 00:22:01.167 EOF 00:22:01.167 )") 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.167 { 00:22:01.167 "params": { 00:22:01.167 "name": "Nvme$subsystem", 00:22:01.167 "trtype": "$TEST_TRANSPORT", 00:22:01.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.167 "adrfam": "ipv4", 00:22:01.167 "trsvcid": "$NVMF_PORT", 00:22:01.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.167 "hdgst": ${hdgst:-false}, 00:22:01.167 "ddgst": ${ddgst:-false} 00:22:01.167 }, 00:22:01.167 "method": "bdev_nvme_attach_controller" 00:22:01.167 } 00:22:01.167 EOF 00:22:01.167 )") 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.167 { 00:22:01.167 "params": { 00:22:01.167 "name": "Nvme$subsystem", 00:22:01.167 "trtype": "$TEST_TRANSPORT", 00:22:01.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.167 "adrfam": "ipv4", 00:22:01.167 "trsvcid": "$NVMF_PORT", 00:22:01.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.167 "hdgst": 
${hdgst:-false}, 00:22:01.167 "ddgst": ${ddgst:-false} 00:22:01.167 }, 00:22:01.167 "method": "bdev_nvme_attach_controller" 00:22:01.167 } 00:22:01.167 EOF 00:22:01.167 )") 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.167 [2024-11-20 16:33:46.950893] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:01.167 [2024-11-20 16:33:46.950946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.167 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.167 { 00:22:01.167 "params": { 00:22:01.167 "name": "Nvme$subsystem", 00:22:01.168 "trtype": "$TEST_TRANSPORT", 00:22:01.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "$NVMF_PORT", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.168 "hdgst": ${hdgst:-false}, 00:22:01.168 "ddgst": ${ddgst:-false} 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 } 00:22:01.168 EOF 00:22:01.168 )") 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.168 { 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme$subsystem", 00:22:01.168 "trtype": 
"$TEST_TRANSPORT", 00:22:01.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "$NVMF_PORT", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.168 "hdgst": ${hdgst:-false}, 00:22:01.168 "ddgst": ${ddgst:-false} 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 } 00:22:01.168 EOF 00:22:01.168 )") 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.168 { 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme$subsystem", 00:22:01.168 "trtype": "$TEST_TRANSPORT", 00:22:01.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "$NVMF_PORT", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.168 "hdgst": ${hdgst:-false}, 00:22:01.168 "ddgst": ${ddgst:-false} 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 } 00:22:01.168 EOF 00:22:01.168 )") 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:01.168 { 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme$subsystem", 00:22:01.168 "trtype": "$TEST_TRANSPORT", 00:22:01.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": 
"$NVMF_PORT", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.168 "hdgst": ${hdgst:-false}, 00:22:01.168 "ddgst": ${ddgst:-false} 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 } 00:22:01.168 EOF 00:22:01.168 )") 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:01.168 16:33:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme1", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme2", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme3", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:01.168 "hdgst": false, 00:22:01.168 
"ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme4", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme5", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme6", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme7", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme8", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 
"trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme9", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 },{ 00:22:01.168 "params": { 00:22:01.168 "name": "Nvme10", 00:22:01.168 "trtype": "tcp", 00:22:01.168 "traddr": "10.0.0.2", 00:22:01.168 "adrfam": "ipv4", 00:22:01.168 "trsvcid": "4420", 00:22:01.168 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:01.168 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:01.168 "hdgst": false, 00:22:01.168 "ddgst": false 00:22:01.168 }, 00:22:01.168 "method": "bdev_nvme_attach_controller" 00:22:01.168 }' 00:22:01.168 [2024-11-20 16:33:47.023959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.168 [2024-11-20 16:33:47.061210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@10 -- # set +x 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2268703 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:02.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2268703 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:02.675 16:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2268442 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.616 { 00:22:03.616 "params": { 00:22:03.616 "name": "Nvme$subsystem", 00:22:03.616 "trtype": "$TEST_TRANSPORT", 00:22:03.616 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:22:03.616 "adrfam": "ipv4", 00:22:03.616 "trsvcid": "$NVMF_PORT", 00:22:03.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.616 "hdgst": ${hdgst:-false}, 00:22:03.616 "ddgst": ${ddgst:-false} 00:22:03.616 }, 00:22:03.616 "method": "bdev_nvme_attach_controller" 00:22:03.616 } 00:22:03.616 EOF 00:22:03.616 )") 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.616 { 00:22:03.616 "params": { 00:22:03.616 "name": "Nvme$subsystem", 00:22:03.616 "trtype": "$TEST_TRANSPORT", 00:22:03.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.616 "adrfam": "ipv4", 00:22:03.616 "trsvcid": "$NVMF_PORT", 00:22:03.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.616 "hdgst": ${hdgst:-false}, 00:22:03.616 "ddgst": ${ddgst:-false} 00:22:03.616 }, 00:22:03.616 "method": "bdev_nvme_attach_controller" 00:22:03.616 } 00:22:03.616 EOF 00:22:03.616 )") 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.616 { 00:22:03.616 "params": { 00:22:03.616 "name": "Nvme$subsystem", 00:22:03.616 "trtype": "$TEST_TRANSPORT", 00:22:03.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.616 "adrfam": "ipv4", 00:22:03.616 "trsvcid": "$NVMF_PORT", 00:22:03.616 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.616 "hdgst": ${hdgst:-false}, 00:22:03.616 "ddgst": ${ddgst:-false} 00:22:03.616 }, 00:22:03.616 "method": "bdev_nvme_attach_controller" 00:22:03.616 } 00:22:03.616 EOF 00:22:03.616 )") 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.616 { 00:22:03.616 "params": { 00:22:03.616 "name": "Nvme$subsystem", 00:22:03.616 "trtype": "$TEST_TRANSPORT", 00:22:03.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.616 "adrfam": "ipv4", 00:22:03.616 "trsvcid": "$NVMF_PORT", 00:22:03.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.616 "hdgst": ${hdgst:-false}, 00:22:03.616 "ddgst": ${ddgst:-false} 00:22:03.616 }, 00:22:03.616 "method": "bdev_nvme_attach_controller" 00:22:03.616 } 00:22:03.616 EOF 00:22:03.616 )") 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.616 { 00:22:03.616 "params": { 00:22:03.616 "name": "Nvme$subsystem", 00:22:03.616 "trtype": "$TEST_TRANSPORT", 00:22:03.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.616 "adrfam": "ipv4", 00:22:03.616 "trsvcid": "$NVMF_PORT", 00:22:03.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.616 "hdgst": 
${hdgst:-false}, 00:22:03.616 "ddgst": ${ddgst:-false} 00:22:03.616 }, 00:22:03.616 "method": "bdev_nvme_attach_controller" 00:22:03.616 } 00:22:03.616 EOF 00:22:03.616 )") 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.616 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.617 { 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme$subsystem", 00:22:03.617 "trtype": "$TEST_TRANSPORT", 00:22:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "$NVMF_PORT", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.617 "hdgst": ${hdgst:-false}, 00:22:03.617 "ddgst": ${ddgst:-false} 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 } 00:22:03.617 EOF 00:22:03.617 )") 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.617 { 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme$subsystem", 00:22:03.617 "trtype": "$TEST_TRANSPORT", 00:22:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "$NVMF_PORT", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.617 "hdgst": ${hdgst:-false}, 00:22:03.617 "ddgst": ${ddgst:-false} 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 
00:22:03.617 } 00:22:03.617 EOF 00:22:03.617 )") 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.617 { 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme$subsystem", 00:22:03.617 "trtype": "$TEST_TRANSPORT", 00:22:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "$NVMF_PORT", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.617 "hdgst": ${hdgst:-false}, 00:22:03.617 "ddgst": ${ddgst:-false} 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 } 00:22:03.617 EOF 00:22:03.617 )") 00:22:03.617 [2024-11-20 16:33:49.358685] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:22:03.617 [2024-11-20 16:33:49.358752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269207 ] 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.617 { 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme$subsystem", 00:22:03.617 "trtype": "$TEST_TRANSPORT", 00:22:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "$NVMF_PORT", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.617 "hdgst": ${hdgst:-false}, 00:22:03.617 "ddgst": ${ddgst:-false} 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 } 00:22:03.617 EOF 00:22:03.617 )") 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:03.617 { 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme$subsystem", 00:22:03.617 "trtype": "$TEST_TRANSPORT", 00:22:03.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "$NVMF_PORT", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.617 "hdgst": 
${hdgst:-false}, 00:22:03.617 "ddgst": ${ddgst:-false} 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 } 00:22:03.617 EOF 00:22:03.617 )") 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:03.617 16:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme1", 00:22:03.617 "trtype": "tcp", 00:22:03.617 "traddr": "10.0.0.2", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "4420", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.617 "hdgst": false, 00:22:03.617 "ddgst": false 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 },{ 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme2", 00:22:03.617 "trtype": "tcp", 00:22:03.617 "traddr": "10.0.0.2", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "4420", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:03.617 "hdgst": false, 00:22:03.617 "ddgst": false 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 },{ 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme3", 00:22:03.617 "trtype": "tcp", 00:22:03.617 "traddr": "10.0.0.2", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "4420", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:03.617 "hdgst": false, 00:22:03.617 "ddgst": false 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 },{ 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme4", 
00:22:03.617 "trtype": "tcp", 00:22:03.617 "traddr": "10.0.0.2", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "4420", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:03.617 "hdgst": false, 00:22:03.617 "ddgst": false 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 },{ 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme5", 00:22:03.617 "trtype": "tcp", 00:22:03.617 "traddr": "10.0.0.2", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "4420", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:03.617 "hdgst": false, 00:22:03.617 "ddgst": false 00:22:03.617 }, 00:22:03.617 "method": "bdev_nvme_attach_controller" 00:22:03.617 },{ 00:22:03.617 "params": { 00:22:03.617 "name": "Nvme6", 00:22:03.617 "trtype": "tcp", 00:22:03.617 "traddr": "10.0.0.2", 00:22:03.617 "adrfam": "ipv4", 00:22:03.617 "trsvcid": "4420", 00:22:03.617 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:03.617 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:03.618 "hdgst": false, 00:22:03.618 "ddgst": false 00:22:03.618 }, 00:22:03.618 "method": "bdev_nvme_attach_controller" 00:22:03.618 },{ 00:22:03.618 "params": { 00:22:03.618 "name": "Nvme7", 00:22:03.618 "trtype": "tcp", 00:22:03.618 "traddr": "10.0.0.2", 00:22:03.618 "adrfam": "ipv4", 00:22:03.618 "trsvcid": "4420", 00:22:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:03.618 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:03.618 "hdgst": false, 00:22:03.618 "ddgst": false 00:22:03.618 }, 00:22:03.618 "method": "bdev_nvme_attach_controller" 00:22:03.618 },{ 00:22:03.618 "params": { 00:22:03.618 "name": "Nvme8", 00:22:03.618 "trtype": "tcp", 00:22:03.618 "traddr": "10.0.0.2", 00:22:03.618 "adrfam": "ipv4", 00:22:03.618 "trsvcid": "4420", 00:22:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:03.618 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:03.618 "hdgst": false, 
00:22:03.618 "ddgst": false 00:22:03.618 }, 00:22:03.618 "method": "bdev_nvme_attach_controller" 00:22:03.618 },{ 00:22:03.618 "params": { 00:22:03.618 "name": "Nvme9", 00:22:03.618 "trtype": "tcp", 00:22:03.618 "traddr": "10.0.0.2", 00:22:03.618 "adrfam": "ipv4", 00:22:03.618 "trsvcid": "4420", 00:22:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:03.618 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:03.618 "hdgst": false, 00:22:03.618 "ddgst": false 00:22:03.618 }, 00:22:03.618 "method": "bdev_nvme_attach_controller" 00:22:03.618 },{ 00:22:03.618 "params": { 00:22:03.618 "name": "Nvme10", 00:22:03.618 "trtype": "tcp", 00:22:03.618 "traddr": "10.0.0.2", 00:22:03.618 "adrfam": "ipv4", 00:22:03.618 "trsvcid": "4420", 00:22:03.618 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:03.618 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:03.618 "hdgst": false, 00:22:03.618 "ddgst": false 00:22:03.618 }, 00:22:03.618 "method": "bdev_nvme_attach_controller" 00:22:03.618 }' 00:22:03.618 [2024-11-20 16:33:49.433101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.618 [2024-11-20 16:33:49.468820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.004 Running I/O for 1 seconds... 
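The trace above is nvmf/common.sh's gen_nvmf_target_json expanding once per subsystem: a loop appends one heredoc JSON fragment per cnode, the fragments are comma-joined via IFS, and the result is validated with `jq .` before being fed to bdevperf over /dev/fd. A minimal standalone sketch of that pattern, with illustrative default values standing in for the harness's environment variables, and python3 substituted for jq to keep the sketch dependency-free:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the per-subsystem heredoc loop seen in gen_nvmf_target_json.
# The :-defaults below are illustrative stand-ins, not the harness's values.
gen_target_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments into one JSON array (the harness joins
    # with IFS=, and pretty-prints/validates through `jq .`).
    local IFS=,
    printf '[%s]\n' "${config[*]}"
}

# Parse-check the generated document (stand-in for the harness's `jq .`).
gen_target_json 1 2 | python3 -c 'import json,sys; json.load(sys.stdin)'
```

Joining with `local IFS=,` keeps the join local to the function, so word splitting elsewhere in the script is unaffected; the parse check fails fast if any fragment is malformed.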
00:22:06.206 1865.00 IOPS, 116.56 MiB/s
00:22:06.206 Latency(us)
00:22:06.206 [2024-11-20T15:33:52.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme1n1 : 1.16 220.05 13.75 0.00 0.00 287884.80 29928.11 244667.73
00:22:06.207 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme2n1 : 1.17 218.30 13.64 0.00 0.00 285562.67 14745.60 255153.49
00:22:06.207 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme3n1 : 1.16 224.65 14.04 0.00 0.00 271614.48 4450.99 241172.48
00:22:06.207 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme4n1 : 1.16 220.83 13.80 0.00 0.00 272634.24 15400.96 241172.48
00:22:06.207 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme5n1 : 1.19 215.73 13.48 0.00 0.00 273537.71 17913.17 253405.87
00:22:06.207 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme6n1 : 1.18 270.37 16.90 0.00 0.00 215135.57 32331.09 237677.23
00:22:06.207 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme7n1 : 1.18 271.23 16.95 0.00 0.00 210963.46 15728.64 248162.99
00:22:06.207 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme8n1 : 1.19 269.07 16.82 0.00 0.00 209107.80 15619.41 256901.12
00:22:06.207 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme9n1 : 1.19 268.17 16.76 0.00 0.00 206105.94 16820.91 262144.00
00:22:06.207 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.207 Verification LBA range: start 0x0 length 0x400
00:22:06.207 Nvme10n1 : 1.18 220.15 13.76 0.00 0.00 245215.76 4778.67 265639.25
00:22:06.207 [2024-11-20T15:33:52.166Z] ===================================================================================================================
00:22:06.207 [2024-11-20T15:33:52.166Z] Total : 2398.55 149.91 0.00 0.00 244411.66 4450.99 265639.25
00:22:06.467 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:06.467 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:06.467 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:06.468 16:33:52
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.468 rmmod nvme_tcp 00:22:06.468 rmmod nvme_fabrics 00:22:06.468 rmmod nvme_keyring 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2268442 ']' 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2268442 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2268442 ']' 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2268442 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2268442 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:06.468 16:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2268442' 00:22:06.468 killing process with pid 2268442 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2268442 00:22:06.468 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2268442 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.728 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.729 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.729 16:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.277 00:22:09.277 real 0m16.528s 00:22:09.277 user 0m33.628s 00:22:09.277 sys 0m6.638s 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.277 ************************************ 00:22:09.277 END TEST nvmf_shutdown_tc1 00:22:09.277 ************************************ 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:09.277 ************************************ 00:22:09.277 START TEST nvmf_shutdown_tc2 00:22:09.277 ************************************ 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.277 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.277 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.277 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.278 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.278 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:09.278 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:09.278 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.278 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:09.278 Found net devices under 0000:31:00.0: cvl_0_0 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.278 16:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:09.278 Found net devices under 0000:31:00.1: cvl_0_1 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.278 16:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.278 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.278 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.278 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:22:09.278 00:22:09.278 --- 10.0.0.2 ping statistics --- 00:22:09.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.278 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:22:09.278 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:22:09.278 00:22:09.278 --- 10.0.0.1 ping statistics --- 00:22:09.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.278 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:22:09.278 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.278 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.279 
16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2270320 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2270320 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2270320 ']' 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.279 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.279 [2024-11-20 16:33:55.121585] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:22:09.279 [2024-11-20 16:33:55.121645] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.279 [2024-11-20 16:33:55.216114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.540 [2024-11-20 16:33:55.250940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.540 [2024-11-20 16:33:55.250971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.540 [2024-11-20 16:33:55.250976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.540 [2024-11-20 16:33:55.250986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.540 [2024-11-20 16:33:55.250991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.540 [2024-11-20 16:33:55.252317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.540 [2024-11-20 16:33:55.252482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.540 [2024-11-20 16:33:55.252645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.540 [2024-11-20 16:33:55.252648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.112 [2024-11-20 16:33:55.968723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.112 16:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.112 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.112 Malloc1 00:22:10.373 [2024-11-20 16:33:56.083943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.373 Malloc2 00:22:10.373 Malloc3 00:22:10.373 Malloc4 00:22:10.373 Malloc5 00:22:10.373 Malloc6 00:22:10.373 Malloc7 00:22:10.634 Malloc8 00:22:10.634 Malloc9 
00:22:10.634 Malloc10 00:22:10.634 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.634 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2270708 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2270708 /var/tmp/bdevperf.sock 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2270708 ']' 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.635 "hdgst": ${hdgst:-false}, 00:22:10.635 "ddgst": ${ddgst:-false} 00:22:10.635 }, 00:22:10.635 "method": "bdev_nvme_attach_controller" 00:22:10.635 } 00:22:10.635 EOF 00:22:10.635 )") 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.635 "hdgst": ${hdgst:-false}, 00:22:10.635 "ddgst": ${ddgst:-false} 00:22:10.635 }, 00:22:10.635 "method": "bdev_nvme_attach_controller" 00:22:10.635 } 00:22:10.635 EOF 00:22:10.635 )") 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.635 "hdgst": ${hdgst:-false}, 00:22:10.635 "ddgst": ${ddgst:-false} 00:22:10.635 }, 00:22:10.635 "method": "bdev_nvme_attach_controller" 00:22:10.635 } 00:22:10.635 EOF 00:22:10.635 )") 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.635 "hdgst": ${hdgst:-false}, 00:22:10.635 "ddgst": ${ddgst:-false} 00:22:10.635 }, 00:22:10.635 "method": "bdev_nvme_attach_controller" 00:22:10.635 } 00:22:10.635 EOF 00:22:10.635 )") 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.635 "hdgst": ${hdgst:-false}, 00:22:10.635 "ddgst": ${ddgst:-false} 00:22:10.635 }, 00:22:10.635 "method": "bdev_nvme_attach_controller" 00:22:10.635 } 00:22:10.635 EOF 00:22:10.635 )") 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 
00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.635 "hdgst": ${hdgst:-false}, 00:22:10.635 "ddgst": ${ddgst:-false} 00:22:10.635 }, 00:22:10.635 "method": "bdev_nvme_attach_controller" 00:22:10.635 } 00:22:10.635 EOF 00:22:10.635 )") 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.635 [2024-11-20 16:33:56.533058] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.635 "hdgst": ${hdgst:-false}, 00:22:10.635 "ddgst": ${ddgst:-false} 00:22:10.635 }, 00:22:10.635 "method": "bdev_nvme_attach_controller" 00:22:10.635 } 00:22:10.635 EOF 00:22:10.635 )") 00:22:10.635 [2024-11-20 16:33:56.533114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270708 ] 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.635 "hdgst": ${hdgst:-false}, 00:22:10.635 "ddgst": ${ddgst:-false} 00:22:10.635 }, 00:22:10.635 "method": "bdev_nvme_attach_controller" 00:22:10.635 } 00:22:10.635 EOF 00:22:10.635 )") 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.635 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.635 { 00:22:10.635 "params": { 00:22:10.635 "name": "Nvme$subsystem", 00:22:10.635 "trtype": "$TEST_TRANSPORT", 00:22:10.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.635 "adrfam": "ipv4", 00:22:10.635 "trsvcid": "$NVMF_PORT", 00:22:10.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.636 "hdgst": ${hdgst:-false}, 00:22:10.636 "ddgst": ${ddgst:-false} 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 } 00:22:10.636 EOF 00:22:10.636 )") 00:22:10.636 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.636 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:10.636 16:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:10.636 { 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme$subsystem", 00:22:10.636 "trtype": "$TEST_TRANSPORT", 00:22:10.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "$NVMF_PORT", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.636 "hdgst": ${hdgst:-false}, 00:22:10.636 "ddgst": ${ddgst:-false} 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 } 00:22:10.636 EOF 00:22:10.636 )") 00:22:10.636 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:10.636 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:10.636 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:10.636 16:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme1", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme2", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 
00:22:10.636 "params": { 00:22:10.636 "name": "Nvme3", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme4", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme5", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme6", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme7", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:10.636 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme8", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme9", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 },{ 00:22:10.636 "params": { 00:22:10.636 "name": "Nvme10", 00:22:10.636 "trtype": "tcp", 00:22:10.636 "traddr": "10.0.0.2", 00:22:10.636 "adrfam": "ipv4", 00:22:10.636 "trsvcid": "4420", 00:22:10.636 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:10.636 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:10.636 "hdgst": false, 00:22:10.636 "ddgst": false 00:22:10.636 }, 00:22:10.636 "method": "bdev_nvme_attach_controller" 00:22:10.636 }' 00:22:10.897 [2024-11-20 16:33:56.605349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.897 [2024-11-20 16:33:56.641600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.281 Running I/O for 10 seconds... 
00:22:12.281 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.281 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:12.281 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:12.281 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.281 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:12.541 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:12.801 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:13.062 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:13.062 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:13.062 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:13.062 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:13.062 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.062 16:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2270708 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2270708 
']' 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2270708 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:13.062 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.324 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2270708 00:22:13.324 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:13.324 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:13.324 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2270708' 00:22:13.324 killing process with pid 2270708 00:22:13.324 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2270708 00:22:13.324 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2270708 00:22:13.324 Received shutdown signal, test time was about 0.996870 seconds 00:22:13.324 00:22:13.324 Latency(us) 00:22:13.324 [2024-11-20T15:33:59.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.324 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme1n1 : 0.97 197.91 12.37 0.00 0.00 319545.17 17367.04 249910.61 00:22:13.324 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme2n1 : 0.99 258.25 16.14 0.00 0.00 240142.51 20643.84 219327.15 
00:22:13.324 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme3n1 : 0.98 261.71 16.36 0.00 0.00 231501.23 16493.23 246415.36 00:22:13.324 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme4n1 : 1.00 257.03 16.06 0.00 0.00 231404.16 17913.17 242920.11 00:22:13.324 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme5n1 : 0.99 257.64 16.10 0.00 0.00 225953.28 20971.52 242920.11 00:22:13.324 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme6n1 : 0.98 264.70 16.54 0.00 0.00 214693.31 1897.81 227191.47 00:22:13.324 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme7n1 : 0.96 205.64 12.85 0.00 0.00 267633.96 3522.56 244667.73 00:22:13.324 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme8n1 : 0.99 259.14 16.20 0.00 0.00 210251.09 22282.24 241172.48 00:22:13.324 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme9n1 : 0.98 259.93 16.25 0.00 0.00 204442.24 18677.76 246415.36 00:22:13.324 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:13.324 Verification LBA range: start 0x0 length 0x400 00:22:13.324 Nvme10n1 : 0.98 196.50 12.28 0.00 0.00 264036.41 15291.73 263891.63 00:22:13.324 [2024-11-20T15:33:59.283Z] =================================================================================================================== 00:22:13.324 
[2024-11-20T15:33:59.283Z] Total : 2418.45 151.15 0.00 0.00 237516.87 1897.81 263891.63 00:22:13.584 16:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:14.552 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2270320 00:22:14.552 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:14.552 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.553 rmmod nvme_tcp 00:22:14.553 rmmod nvme_fabrics 00:22:14.553 rmmod nvme_keyring 00:22:14.553 16:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2270320 ']' 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2270320 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2270320 ']' 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2270320 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2270320 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2270320' 00:22:14.553 killing process with pid 2270320 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2270320 00:22:14.553 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 2270320 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.813 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.814 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.814 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.814 16:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.381 00:22:17.381 real 0m8.030s 00:22:17.381 user 0m24.666s 00:22:17.381 sys 0m1.254s 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.381 ************************************ 00:22:17.381 END TEST nvmf_shutdown_tc2 00:22:17.381 ************************************ 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:17.381 ************************************ 00:22:17.381 START TEST nvmf_shutdown_tc3 00:22:17.381 ************************************ 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:17.381 16:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:17.381 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.381 16:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:17.381 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:17.381 Found net devices under 0000:31:00.0: cvl_0_0 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:17.381 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:17.382 Found net devices under 0000:31:00.1: cvl_0_1 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.382 16:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:17.382 16:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:17.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:22:17.382 00:22:17.382 --- 10.0.0.2 ping statistics --- 00:22:17.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.382 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:22:17.382 00:22:17.382 --- 10.0.0.1 ping statistics --- 00:22:17.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.382 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2272165 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2272165 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2272165 ']' 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.382 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:17.382 [2024-11-20 16:34:03.216132] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:17.382 [2024-11-20 16:34:03.216218] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.382 [2024-11-20 16:34:03.312527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.644 [2024-11-20 16:34:03.347205] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.644 [2024-11-20 16:34:03.347234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.644 [2024-11-20 16:34:03.347239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.644 [2024-11-20 16:34:03.347244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.644 [2024-11-20 16:34:03.347249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:17.644 [2024-11-20 16:34:03.348812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.644 [2024-11-20 16:34:03.348967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.644 [2024-11-20 16:34:03.349100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.644 [2024-11-20 16:34:03.349102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:18.214 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.214 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:18.214 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:18.214 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.214 16:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.214 [2024-11-20 16:34:04.040701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.214 16:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.214 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.214 Malloc1 00:22:18.214 [2024-11-20 16:34:04.148330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.214 Malloc2 00:22:18.475 Malloc3 00:22:18.475 Malloc4 00:22:18.475 Malloc5 00:22:18.475 Malloc6 00:22:18.475 Malloc7 00:22:18.475 Malloc8 00:22:18.736 Malloc9 
00:22:18.736 Malloc10 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2272473 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2272473 /var/tmp/bdevperf.sock 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2272473 ']' 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.736 { 00:22:18.736 "params": { 00:22:18.736 "name": "Nvme$subsystem", 00:22:18.736 "trtype": "$TEST_TRANSPORT", 00:22:18.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.736 "adrfam": "ipv4", 00:22:18.736 "trsvcid": "$NVMF_PORT", 00:22:18.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.736 "hdgst": ${hdgst:-false}, 00:22:18.736 "ddgst": ${ddgst:-false} 00:22:18.736 }, 00:22:18.736 "method": "bdev_nvme_attach_controller" 00:22:18.736 } 00:22:18.736 EOF 00:22:18.736 )") 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:22:18.736 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.736 { 00:22:18.736 "params": { 00:22:18.736 "name": "Nvme$subsystem", 00:22:18.736 "trtype": "$TEST_TRANSPORT", 00:22:18.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.736 "adrfam": "ipv4", 00:22:18.736 "trsvcid": "$NVMF_PORT", 00:22:18.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.736 "hdgst": ${hdgst:-false}, 00:22:18.736 "ddgst": ${ddgst:-false} 00:22:18.736 }, 00:22:18.736 "method": "bdev_nvme_attach_controller" 00:22:18.736 } 00:22:18.736 EOF 00:22:18.737 )") 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.737 { 00:22:18.737 "params": { 00:22:18.737 "name": "Nvme$subsystem", 00:22:18.737 "trtype": "$TEST_TRANSPORT", 00:22:18.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.737 "adrfam": "ipv4", 00:22:18.737 "trsvcid": "$NVMF_PORT", 00:22:18.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.737 "hdgst": ${hdgst:-false}, 00:22:18.737 "ddgst": ${ddgst:-false} 00:22:18.737 }, 00:22:18.737 "method": "bdev_nvme_attach_controller" 00:22:18.737 } 00:22:18.737 EOF 00:22:18.737 )") 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:22:18.737 { 00:22:18.737 "params": { 00:22:18.737 "name": "Nvme$subsystem", 00:22:18.737 "trtype": "$TEST_TRANSPORT", 00:22:18.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.737 "adrfam": "ipv4", 00:22:18.737 "trsvcid": "$NVMF_PORT", 00:22:18.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.737 "hdgst": ${hdgst:-false}, 00:22:18.737 "ddgst": ${ddgst:-false} 00:22:18.737 }, 00:22:18.737 "method": "bdev_nvme_attach_controller" 00:22:18.737 } 00:22:18.737 EOF 00:22:18.737 )") 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.737 { 00:22:18.737 "params": { 00:22:18.737 "name": "Nvme$subsystem", 00:22:18.737 "trtype": "$TEST_TRANSPORT", 00:22:18.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.737 "adrfam": "ipv4", 00:22:18.737 "trsvcid": "$NVMF_PORT", 00:22:18.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.737 "hdgst": ${hdgst:-false}, 00:22:18.737 "ddgst": ${ddgst:-false} 00:22:18.737 }, 00:22:18.737 "method": "bdev_nvme_attach_controller" 00:22:18.737 } 00:22:18.737 EOF 00:22:18.737 )") 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.737 { 00:22:18.737 "params": { 00:22:18.737 "name": "Nvme$subsystem", 00:22:18.737 "trtype": "$TEST_TRANSPORT", 
00:22:18.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.737 "adrfam": "ipv4", 00:22:18.737 "trsvcid": "$NVMF_PORT", 00:22:18.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.737 "hdgst": ${hdgst:-false}, 00:22:18.737 "ddgst": ${ddgst:-false} 00:22:18.737 }, 00:22:18.737 "method": "bdev_nvme_attach_controller" 00:22:18.737 } 00:22:18.737 EOF 00:22:18.737 )") 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.737 [2024-11-20 16:34:04.597175] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:18.737 [2024-11-20 16:34:04.597228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272473 ] 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.737 { 00:22:18.737 "params": { 00:22:18.737 "name": "Nvme$subsystem", 00:22:18.737 "trtype": "$TEST_TRANSPORT", 00:22:18.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.737 "adrfam": "ipv4", 00:22:18.737 "trsvcid": "$NVMF_PORT", 00:22:18.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.737 "hdgst": ${hdgst:-false}, 00:22:18.737 "ddgst": ${ddgst:-false} 00:22:18.737 }, 00:22:18.737 "method": "bdev_nvme_attach_controller" 00:22:18.737 } 00:22:18.737 EOF 00:22:18.737 )") 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.737 { 00:22:18.737 "params": { 00:22:18.737 "name": "Nvme$subsystem", 00:22:18.737 "trtype": "$TEST_TRANSPORT", 00:22:18.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.737 "adrfam": "ipv4", 00:22:18.737 "trsvcid": "$NVMF_PORT", 00:22:18.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.737 "hdgst": ${hdgst:-false}, 00:22:18.737 "ddgst": ${ddgst:-false} 00:22:18.737 }, 00:22:18.737 "method": "bdev_nvme_attach_controller" 00:22:18.737 } 00:22:18.737 EOF 00:22:18.737 )") 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.737 { 00:22:18.737 "params": { 00:22:18.737 "name": "Nvme$subsystem", 00:22:18.737 "trtype": "$TEST_TRANSPORT", 00:22:18.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.737 "adrfam": "ipv4", 00:22:18.737 "trsvcid": "$NVMF_PORT", 00:22:18.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.737 "hdgst": ${hdgst:-false}, 00:22:18.737 "ddgst": ${ddgst:-false} 00:22:18.737 }, 00:22:18.737 "method": "bdev_nvme_attach_controller" 00:22:18.737 } 00:22:18.737 EOF 00:22:18.737 )") 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.737 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:18.737 16:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:18.737 { 00:22:18.737 "params": { 00:22:18.737 "name": "Nvme$subsystem", 00:22:18.737 "trtype": "$TEST_TRANSPORT", 00:22:18.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:18.737 "adrfam": "ipv4", 00:22:18.737 "trsvcid": "$NVMF_PORT", 00:22:18.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:18.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:18.737 "hdgst": ${hdgst:-false}, 00:22:18.737 "ddgst": ${ddgst:-false} 00:22:18.737 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 } 00:22:18.738 EOF 00:22:18.738 )") 00:22:18.738 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:18.738 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:18.738 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:18.738 16:34:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme1", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme2", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 
00:22:18.738 "params": { 00:22:18.738 "name": "Nvme3", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme4", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme5", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme6", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme7", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:18.738 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme8", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme9", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 },{ 00:22:18.738 "params": { 00:22:18.738 "name": "Nvme10", 00:22:18.738 "trtype": "tcp", 00:22:18.738 "traddr": "10.0.0.2", 00:22:18.738 "adrfam": "ipv4", 00:22:18.738 "trsvcid": "4420", 00:22:18.738 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:18.738 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:18.738 "hdgst": false, 00:22:18.738 "ddgst": false 00:22:18.738 }, 00:22:18.738 "method": "bdev_nvme_attach_controller" 00:22:18.738 }' 00:22:18.738 [2024-11-20 16:34:04.673494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.998 [2024-11-20 16:34:04.709952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.938 Running I/O for 10 seconds... 
00:22:19.938 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.938 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:19.938 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:19.938 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.938 16:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:20.198 16:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:20.198 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:20.457 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:20.457 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:20.457 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:20.457 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:20.457 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.457 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.457 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:20.716 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:20.716 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:20.716 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:20.994 16:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2272165 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2272165 ']' 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2272165 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272165 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2272165' 00:22:20.994 killing process with pid 2272165 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2272165 00:22:20.994 16:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2272165 00:22:20.994 [2024-11-20 16:34:06.792158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x670ab0 is same with the state(6) to be set 00:22:20.994 [2024-11-20 16:34:06.792955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8cad10 is same with the state(6) to be set 00:22:20.994 [2024-11-20 16:34:06.794053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x670f80 is same with the state(6) to be set 00:22:20.995 [2024-11-20 16:34:06.795139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x671450 is same with the state(6) to be set 00:22:20.995 [2024-11-20 16:34:06.796630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x671e10 is same with the state(6) to be set 00:22:20.996 [2024-11-20 16:34:06.798267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.996 [2024-11-20 16:34:06.798342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0
is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 
00:22:20.997 [2024-11-20 16:34:06.798403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798460] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.798573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6727b0 
is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 
00:22:20.997 [2024-11-20 16:34:06.799345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.997 [2024-11-20 16:34:06.799378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799406] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 
is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 
00:22:20.998 [2024-11-20 16:34:06.799579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.799588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x672c80 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800090] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.998 [2024-11-20 16:34:06.800121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 
is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 
00:22:20.999 [2024-11-20 16:34:06.800266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set 00:22:20.999 [2024-11-20 16:34:06.800320] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set
00:22:20.999 [2024-11-20 16:34:06.800325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set
00:22:20.999 [2024-11-20 16:34:06.800329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set
00:22:20.999 [2024-11-20 16:34:06.800334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set
00:22:20.999 [2024-11-20 16:34:06.800339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8db0c0 is same with the state(6) to be set
00:22:21.000 [2024-11-20 16:34:06.809474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e250 is same with the state(6) to be set
00:22:21.000 [2024-11-20 16:34:06.809606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf751e0 is same with the state(6) to be set
00:22:21.000 [2024-11-20 16:34:06.809709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0380 is same with the state(6) to be set
00:22:21.000 [2024-11-20 16:34:06.809801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1490 is same with the state(6) to be set
00:22:21.000 [2024-11-20 16:34:06.809900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.809959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.809967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb49c0 is same with the state(6) to be set
00:22:21.000 [2024-11-20 16:34:06.809999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.810008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.810016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.810024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.810033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.810040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.810048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.810055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.810063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb571b0 is same with the state(6) to be set
00:22:21.000 [2024-11-20 16:34:06.810087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.000 [2024-11-20 16:34:06.810098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.000 [2024-11-20 16:34:06.810107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d190 is same with the state(6) to be set
00:22:21.001 [2024-11-20 16:34:06.810176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4ade0 is same with the state(6) to be set
00:22:21.001 [2024-11-20 16:34:06.810263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb546e0 is same with the state(6) to be set
00:22:21.001 [2024-11-20 16:34:06.810352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:21.001 [2024-11-20 16:34:06.810408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4afe0 is same with the state(6) to be set
00:22:21.001 [2024-11-20 16:34:06.810744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.001 [2024-11-20 16:34:06.810941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.001 [2024-11-20 16:34:06.810949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.810958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.810965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.810975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.810987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.810997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.002 [2024-11-20 16:34:06.811490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.002 [2024-11-20 16:34:06.811497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.811852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.811881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.003 [2024-11-20 16:34:06.812506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.812524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.812535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.812543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.812553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.812560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.812569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.812577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.812586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.812594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.812604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.812611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.812625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.812632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:21.003 [2024-11-20 16:34:06.812642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.003 [2024-11-20 16:34:06.812649] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.003 [2024-11-20 16:34:06.812812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.003 [2024-11-20 16:34:06.812819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:21.004 [2024-11-20 16:34:06.812847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812939] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.812988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.812995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 
16:34:06.813233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813328] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.004 [2024-11-20 16:34:06.813394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.004 [2024-11-20 16:34:06.813402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 
[2024-11-20 16:34:06.813521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.813598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.813605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834755] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.005 [2024-11-20 16:34:06.834833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.005 [2024-11-20 16:34:06.834840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.834849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.834866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.834884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.834901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.834918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.834934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 
[2024-11-20 16:34:06.834951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.834968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.834989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.834997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 
16:34:06.835338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.835448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.006 [2024-11-20 16:34:06.835456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.006 [2024-11-20 16:34:06.836951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:21.006 [2024-11-20 16:34:06.836989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:21.006 [2024-11-20 16:34:06.837006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e250 (9): Bad file descriptor 00:22:21.006 [2024-11-20 16:34:06.837021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4afe0 (9): Bad file descriptor 00:22:21.006 [2024-11-20 16:34:06.837054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf751e0 (9): Bad file descriptor 00:22:21.006 [2024-11-20 16:34:06.837075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb0380 (9): Bad file descriptor 00:22:21.006 [2024-11-20 16:34:06.837095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1490 (9): Bad file descriptor 00:22:21.007 [2024-11-20 16:34:06.837118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb49c0 (9): Bad file descriptor 00:22:21.007 [2024-11-20 16:34:06.837137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0xb571b0 (9): Bad file descriptor 00:22:21.007 [2024-11-20 16:34:06.837153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4d190 (9): Bad file descriptor 00:22:21.007 [2024-11-20 16:34:06.837169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4ade0 (9): Bad file descriptor 00:22:21.007 [2024-11-20 16:34:06.837183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb546e0 (9): Bad file descriptor 00:22:21.007 [2024-11-20 16:34:06.837219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837292] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837384] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 
[2024-11-20 16:34:06.837585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.007 [2024-11-20 16:34:06.837841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.007 [2024-11-20 16:34:06.837848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.837864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.837881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.837898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.837915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.837931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.837948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 
16:34:06.837965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.837987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.837997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.838305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.008 [2024-11-20 16:34:06.838313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.008 [2024-11-20 16:34:06.839818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:21.008 [2024-11-20 16:34:06.841370] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.008 [2024-11-20 16:34:06.841682] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.008 [2024-11-20 16:34:06.841721] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.008 [2024-11-20 16:34:06.841756] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.008 [2024-11-20 16:34:06.841794] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:22:21.008 [2024-11-20 16:34:06.841809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:21.008 [2024-11-20 16:34:06.842282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.008 [2024-11-20 16:34:06.842323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4afe0 with addr=10.0.0.2, port=4420 00:22:21.008 [2024-11-20 16:34:06.842335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4afe0 is same with the state(6) to be set 00:22:21.008 [2024-11-20 16:34:06.842712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.008 [2024-11-20 16:34:06.842724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7e250 with addr=10.0.0.2, port=4420 00:22:21.008 [2024-11-20 16:34:06.842731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e250 is same with the state(6) to be set 00:22:21.008 [2024-11-20 16:34:06.842905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.008 [2024-11-20 16:34:06.842915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4ade0 with addr=10.0.0.2, port=4420 00:22:21.008 [2024-11-20 16:34:06.842922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4ade0 is same with the state(6) to be set 00:22:21.008 [2024-11-20 16:34:06.843025] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:21.008 [2024-11-20 16:34:06.843649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.008 [2024-11-20 16:34:06.843665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb571b0 with addr=10.0.0.2, port=4420 00:22:21.008 [2024-11-20 16:34:06.843674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xb571b0 is same with the state(6) to be set 00:22:21.008 [2024-11-20 16:34:06.843685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4afe0 (9): Bad file descriptor 00:22:21.008 [2024-11-20 16:34:06.843696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e250 (9): Bad file descriptor 00:22:21.008 [2024-11-20 16:34:06.843711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4ade0 (9): Bad file descriptor 00:22:21.008 [2024-11-20 16:34:06.844046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb571b0 (9): Bad file descriptor 00:22:21.008 [2024-11-20 16:34:06.844059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:21.008 [2024-11-20 16:34:06.844066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:21.008 [2024-11-20 16:34:06.844075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:21.008 [2024-11-20 16:34:06.844084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:21.008 [2024-11-20 16:34:06.844092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:21.009 [2024-11-20 16:34:06.844099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:21.009 [2024-11-20 16:34:06.844107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:21.009 [2024-11-20 16:34:06.844113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:22:21.009 [2024-11-20 16:34:06.844121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:21.009 [2024-11-20 16:34:06.844127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:21.009 [2024-11-20 16:34:06.844134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:21.009 [2024-11-20 16:34:06.844140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:21.009 [2024-11-20 16:34:06.844189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:21.009 [2024-11-20 16:34:06.844197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:21.009 [2024-11-20 16:34:06.844204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:21.009 [2024-11-20 16:34:06.844211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:21.009 [2024-11-20 16:34:06.847124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:21.009 [2024-11-20 16:34:06.847432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847526] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.009 [2024-11-20 16:34:06.847655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.009 [2024-11-20 16:34:06.847663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 
16:34:06.847814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.847978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.847990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.848000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.848007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.848016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.848024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.848034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.848041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.848050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.848058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.848067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.848075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.848088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 [2024-11-20 16:34:06.848096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010 [2024-11-20 16:34:06.848105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.010 
[2024-11-20 16:34:06.848113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.010
[... repeated nvme_io_qpair_print_command READ (sqid:1 cid:57-63 nsid:1 lba:31872-32640 len:128) / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) entry pairs elided ...]
[2024-11-20 16:34:06.848239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf491c0 is same with the state(6) to be set 00:22:21.010
[... repeated READ (sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128) / ABORTED - SQ DELETION (00/08) entry pairs elided ...]
[2024-11-20 16:34:06.850654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf57770 is same with the state(6) to be set 00:22:21.012
[... repeated READ (sqid:1 cid:0-45 nsid:1 lba:24576-30336 len:128) / ABORTED - SQ DELETION (00/08) entry pairs elided ...]
[2024-11-20 16:34:06.852741] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.013 [2024-11-20 16:34:06.852886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.013 [2024-11-20 16:34:06.852895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.852903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.852912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.852920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.852929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 
[2024-11-20 16:34:06.852936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.852945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.852953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.852962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.852970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.852979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.852989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.852999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.853006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.853015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.853023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.853032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.853040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.853049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5a180 is same with the state(6) to be set 00:22:21.014 [2024-11-20 16:34:06.854316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:21.014 [2024-11-20 16:34:06.854505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.014 [2024-11-20 16:34:06.854835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.014 [2024-11-20 16:34:06.854844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.854851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.854861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.854868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.854878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 
16:34:06.854886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.854896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.854903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.854913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.854920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.854930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.854938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.854947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.854955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.854966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.854973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.854989] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.854997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 
[2024-11-20 16:34:06.855184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.855423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.855431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5b710 is same with the state(6) to be set 00:22:21.015 [2024-11-20 16:34:06.856707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.856720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.856734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.856743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:21.015 [2024-11-20 16:34:06.856755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.856764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.856775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.856783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.856793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.856800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.015 [2024-11-20 16:34:06.856809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.015 [2024-11-20 16:34:06.856817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.856984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.856992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:21.016 [2024-11-20 16:34:06.857053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857146] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 
16:34:06.857434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.016 [2024-11-20 16:34:06.857477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.016 [2024-11-20 16:34:06.857484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.857494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.857501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.857511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862305] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 
[2024-11-20 16:34:06.862509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.862596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.862605] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5cc50 is same with the state(6) to be set 00:22:21.017 [2024-11-20 16:34:06.863943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.863959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.863975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.863990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:21.017 [2024-11-20 16:34:06.864158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864249] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.017 [2024-11-20 16:34:06.864283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.017 [2024-11-20 16:34:06.864293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.018 [2024-11-20 16:34:06.864300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.018 [2024-11-20 16:34:06.864310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.018 [2024-11-20 16:34:06.864317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.018 [2024-11-20 16:34:06.864328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.018 [2024-11-20 16:34:06.864336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.018 [2024-11-20 16:34:06.864346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:21.018 [2024-11-20 16:34:06.864354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 41 further READ commands (sqid:1, cid:23-63, nsid:1, lba:19328-24448, len:128), each followed by an identical ABORTED - SQ DELETION (00/08) completion, elided ...]
00:22:21.019 [2024-11-20 16:34:06.865062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5e180 is same with the state(6) to be set
00:22:21.019 [2024-11-20 16:34:06.867207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.867233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.867243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.867253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.867348] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:22:21.019 [2024-11-20 16:34:06.867363] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:21.019 [2024-11-20 16:34:06.867443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:21.019 task offset: 28032 on job bdev=Nvme2n1 fails
00:22:21.019 Latency(us)
00:22:21.019 [2024-11-20T15:34:06.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:21.019 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error; per-job "ended in about N seconds with error" repeats elided)
00:22:21.019 Nvme1n1  : 0.97 197.79 12.36 65.93 0.00 240012.16  7263.57 255153.49
00:22:21.019 Nvme2n1  : 0.96 199.18 12.45 66.39 0.00 233578.88 20206.93 270882.13
00:22:21.019 Nvme3n1  : 0.98 196.09 12.26 65.36 0.00 232647.89 16602.45 253405.87
00:22:21.019 Nvme4n1  : 0.97 198.08 12.38 66.03 0.00 225402.67 25122.13 232434.35
00:22:21.019 Nvme5n1  : 0.98 130.41  8.15 65.20 0.00 298456.18 30146.56 248162.99
00:22:21.019 Nvme6n1  : 0.97 198.66 12.42 66.22 0.00 215259.73 19333.12 265639.25
00:22:21.019 Nvme7n1  : 0.98 195.14 12.20 65.05 0.00 214927.36 31020.37 237677.23
00:22:21.019 Nvme8n1  : 0.99 198.72 12.42 64.89 0.00 207493.49 19333.12 227191.47
00:22:21.019 Nvme9n1  : 0.99 128.84  8.05 64.42 0.00 277133.94 18896.21 283115.52
00:22:21.019 Nvme10n1 : 1.00 128.52  8.03 64.26 0.00 271647.57 14199.47 262144.00
00:22:21.019 [2024-11-20T15:34:06.978Z] ===================================================================================================================
00:22:21.019 [2024-11-20T15:34:06.978Z] Total    : 1771.42 110.71 653.75 0.00 238299.36 7263.57 283115.52
00:22:21.019 [2024-11-20 16:34:06.894374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:21.019 [2024-11-20 16:34:06.894404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.894703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.019 [2024-11-20 16:34:06.894722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0xb4d190 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.894732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4d190 is same with the state(6) to be set
00:22:21.019 [2024-11-20 16:34:06.895053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.019 [2024-11-20 16:34:06.895065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb546e0 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.895072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb546e0 is same with the state(6) to be set
00:22:21.019 [2024-11-20 16:34:06.895453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.019 [2024-11-20 16:34:06.895463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb49c0 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.895470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb49c0 is same with the state(6) to be set
00:22:21.019 [2024-11-20 16:34:06.895772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.019 [2024-11-20 16:34:06.895782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf751e0 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.895789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf751e0 is same with the state(6) to be set
00:22:21.019 [2024-11-20 16:34:06.897410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.897427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.897437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.897447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:21.019 [2024-11-20 16:34:06.897820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.019 [2024-11-20 16:34:06.897834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb0380 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.897842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0380 is same with the state(6) to be set
00:22:21.019 [2024-11-20 16:34:06.898164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.019 [2024-11-20 16:34:06.898175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb1490 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.898182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1490 is same with the state(6) to be set
[2024-11-20 16:34:06.898195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4d190 (9): Bad file descriptor
00:22:21.019 [2024-11-20 16:34:06.898208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb546e0 (9): Bad file descriptor
00:22:21.019 [2024-11-20 16:34:06.898217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb49c0 (9): Bad file descriptor
00:22:21.019 [2024-11-20 16:34:06.898226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf751e0 (9): Bad file descriptor
00:22:21.019 [2024-11-20 16:34:06.898263] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:22:21.020 [2024-11-20 16:34:06.898275] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:22:21.020 [2024-11-20 16:34:06.898286] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:22:21.020 [2024-11-20 16:34:06.898298] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:21.020 [2024-11-20 16:34:06.898729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.020 [2024-11-20 16:34:06.898742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4ade0 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.898750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4ade0 is same with the state(6) to be set
00:22:21.020 [2024-11-20 16:34:06.898938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.020 [2024-11-20 16:34:06.898949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7e250 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.898956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7e250 is same with the state(6) to be set
00:22:21.020 [2024-11-20 16:34:06.899353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.020 [2024-11-20 16:34:06.899364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4afe0 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.899371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4afe0 is same with the state(6) to be set
00:22:21.020 [2024-11-20 16:34:06.899687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:21.020 [2024-11-20 16:34:06.899697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb571b0 with addr=10.0.0.2, port=4420
[2024-11-20 16:34:06.899708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb571b0 is same with the state(6) to be set
[2024-11-20 16:34:06.899718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb0380 (9): Bad file descriptor
00:22:21.020 [2024-11-20 16:34:06.899727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb1490 (9): Bad file descriptor
00:22:21.020 [2024-11-20 16:34:06.899736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:21.020 [2024-11-20 16:34:06.899743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:21.020 [2024-11-20 16:34:06.899751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:21.020 [2024-11-20 16:34:06.899760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
[... identical "Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed" sequences for cnode5, cnode7 and cnode8 elided ...]
00:22:21.020 [2024-11-20 16:34:06.899927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4ade0 (9): Bad file descriptor
00:22:21.020 [2024-11-20 16:34:06.899938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7e250 (9): Bad file descriptor
00:22:21.020 [2024-11-20 16:34:06.899947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4afe0 (9): Bad file descriptor
00:22:21.020 [2024-11-20 16:34:06.899957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb571b0 (9): Bad file descriptor
[... identical "Ctrlr is in error state / controller reinitialization failed / in failed state / Resetting controller failed" sequences for cnode9, cnode10, cnode4, cnode6 and cnode2 elided ...]
00:22:21.020 [2024-11-20 16:34:06.900134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:21.020 [2024-11-20 16:34:06.900140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:21.020 [2024-11-20 16:34:06.900147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:21.020 [2024-11-20 16:34:06.900153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:21.281 16:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:22:22.222 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2272473
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2272473
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2272473
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:22:22.222 16:34:08
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2272165 ']'
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2272165
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2272165 ']'
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2272165
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2272165) - No such process
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2272165 is not found'
Process with pid 2272165 is not found
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:22:22.222 16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
16:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:24.767
00:22:24.767 real 0m7.392s
00:22:24.767 user 0m17.383s
00:22:24.767 sys 0m1.195s
16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:24.767 ************************************
00:22:24.767 END TEST nvmf_shutdown_tc3
00:22:24.767 ************************************
16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
16:34:10
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:24.767 ************************************ 00:22:24.767 START TEST nvmf_shutdown_tc4 00:22:24.767 ************************************ 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.767 16:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.767 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:24.768 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.768 
16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:24.768 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:24.768 Found net devices under 0000:31:00.0: cvl_0_0 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:24.768 Found net devices under 0000:31:00.1: cvl_0_1 
00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.768 16:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:22:24.768 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:22:24.768 00:22:24.768 --- 10.0.0.2 ping statistics --- 00:22:24.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.768 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:24.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:24.769 00:22:24.769 --- 10.0.0.1 ping statistics --- 00:22:24.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.769 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2273682 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2273682 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2273682 ']' 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:24.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.769 16:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.029 [2024-11-20 16:34:10.744943] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:25.029 [2024-11-20 16:34:10.745002] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.029 [2024-11-20 16:34:10.836204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.029 [2024-11-20 16:34:10.868291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.029 [2024-11-20 16:34:10.868318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.029 [2024-11-20 16:34:10.868324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.029 [2024-11-20 16:34:10.868328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.029 [2024-11-20 16:34:10.868332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:25.029 [2024-11-20 16:34:10.869686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.029 [2024-11-20 16:34:10.869844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.029 [2024-11-20 16:34:10.869959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.029 [2024-11-20 16:34:10.869961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:25.600 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.600 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:25.600 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.600 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.600 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.860 [2024-11-20 16:34:11.580965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.860 16:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.860 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:25.860 Malloc1 00:22:25.860 [2024-11-20 16:34:11.694022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.860 Malloc2 00:22:25.860 Malloc3 00:22:25.860 Malloc4 00:22:26.120 Malloc5 00:22:26.120 Malloc6 00:22:26.120 Malloc7 00:22:26.120 Malloc8 00:22:26.120 Malloc9 
00:22:26.120 Malloc10 00:22:26.120 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.120 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:26.120 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.120 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:26.379 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2274063 00:22:26.379 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:26.379 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:26.379 [2024-11-20 16:34:12.153397] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2273682
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2273682 ']'
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2273682
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273682
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273682'
killing process with pid 2273682
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2273682
00:22:31.673 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2273682
00:22:31.673 Write completed with error (sct=0, sc=8)
00:22:31.673 Write completed with error (sct=0, sc=8)
00:22:31.673 starting I/O failed: -6
00:22:31.673 Write completed with error (sct=0, sc=8)
00:22:31.673 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 [2024-11-20 16:34:17.169230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec950 is same with the state(6) to be set 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 [2024-11-20 16:34:17.169272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec950 is same with the state(6) to be set 00:22:31.674 [2024-11-20 16:34:17.169278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec950 is same with the state(6) to be set 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 [2024-11-20 16:34:17.169283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec950 is same with the state(6) to be set 00:22:31.674 starting I/O failed: -6 00:22:31.674 [2024-11-20 16:34:17.169289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ec950 is same with the state(6) to be set 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 
00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 [2024-11-20 16:34:17.169430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ebae0 is same with the state(6) to be set 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 [2024-11-20 16:34:17.169711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.674 starting I/O failed: -6 00:22:31.674 starting I/O failed: -6 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with 
error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 
00:22:31.674 [2024-11-20 16:34:17.170869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 
starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 
Write completed with error (sct=0, sc=8) 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 Write completed with error (sct=0, sc=8) 00:22:31.674 starting I/O failed: -6 00:22:31.674 [2024-11-20 16:34:17.171826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.674 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 starting I/O failed: -6 00:22:31.675 NVMe io qpair process completion error 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 
00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 [2024-11-20 16:34:17.173493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error 
(sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 
00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 [2024-11-20 16:34:17.174394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with 
error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 
starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.675 starting I/O failed: -6 00:22:31.675 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 [2024-11-20 16:34:17.175281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 
00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, 
sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 [2024-11-20 16:34:17.176048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb140 is same with the state(6) to be set 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 [2024-11-20 16:34:17.176071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb140 is same with the state(6) to be set 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 [2024-11-20 16:34:17.176077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb140 is same with the state(6) to be set 00:22:31.676 [2024-11-20 16:34:17.176083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb140 is same with 
the state(6) to be set 00:22:31.676 starting I/O failed: -6 00:22:31.676 [2024-11-20 16:34:17.176088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb140 is same with the state(6) to be set 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 [2024-11-20 16:34:17.176093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb140 is same with the state(6) to be set 00:22:31.676 [2024-11-20 16:34:17.176099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb140 is same with the state(6) to be set 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 Write completed with error (sct=0, sc=8) 00:22:31.676 starting I/O failed: -6 00:22:31.676 [2024-11-20 16:34:17.176657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device 
or address) on qpair id 4
00:22:31.676 NVMe io qpair process completion error
00:22:31.676 Write completed with error (sct=0, sc=8)
00:22:31.676 starting I/O failed: -6
  [identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines elided]
00:22:31.676 [2024-11-20 16:34:17.177724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
  [write-failure lines elided]
00:22:31.677 [2024-11-20 16:34:17.178694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
  [write-failure lines elided]
00:22:31.677 [2024-11-20 16:34:17.179617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
  [write-failure lines elided]
00:22:31.678 [2024-11-20 16:34:17.181066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:31.678 NVMe io qpair process completion error
  [write-failure lines elided]
00:22:31.678 [2024-11-20 16:34:17.182347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
  [write-failure lines elided]
00:22:31.678 [2024-11-20 16:34:17.183139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
  [write-failure lines elided]
00:22:31.679 [2024-11-20 16:34:17.184051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
  [write-failure lines elided]
00:22:31.679 [2024-11-20 16:34:17.187327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:31.679 NVMe io qpair process completion error
  [write-failure lines elided]
00:22:31.680 [2024-11-20 16:34:17.188330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
  [write-failure lines elided]
00:22:31.680 [2024-11-20 16:34:17.189156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
  [write-failure lines continue]
00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 [2024-11-20 16:34:17.190078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with 
error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.680 starting I/O failed: -6 00:22:31.680 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed 
with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write 
completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 [2024-11-20 16:34:17.191723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.681 NVMe io qpair process completion error 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write 
completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 [2024-11-20 16:34:17.192913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.681 starting I/O failed: -6 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write 
completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O 
failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 [2024-11-20 16:34:17.193864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 starting I/O failed: -6 00:22:31.681 Write completed with error (sct=0, sc=8) 00:22:31.681 Write 
completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 
00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 [2024-11-20 16:34:17.194777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 
Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 
00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: 
-6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 [2024-11-20 16:34:17.199822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.682 NVMe io qpair process completion error 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 
00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 starting I/O failed: -6 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.682 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 [2024-11-20 16:34:17.201237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error 
(sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 Write completed with error (sct=0, sc=8) 00:22:31.683 starting I/O failed: -6 
00:22:31.683 Write completed with error (sct=0, sc=8)
00:22:31.683 starting I/O failed: -6
00:22:31.683 [identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs repeated; duplicates omitted]
00:22:31.683 [2024-11-20 16:34:17.202057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:31.683 [duplicate write-error lines omitted]
00:22:31.683 [2024-11-20 16:34:17.202956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:31.684 [duplicate write-error lines omitted]
00:22:31.684 [2024-11-20 16:34:17.204632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:31.684 NVMe io qpair process completion error
00:22:31.684 [duplicate write-error lines omitted]
00:22:31.684 [2024-11-20 16:34:17.205821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:31.685 [duplicate write-error lines omitted]
00:22:31.685 [2024-11-20 16:34:17.206636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:31.685 [duplicate write-error lines omitted]
00:22:31.685 [2024-11-20 16:34:17.207563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:31.686 [duplicate write-error lines omitted]
00:22:31.686 [2024-11-20 16:34:17.209933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:31.686 NVMe io qpair process completion error
00:22:31.686 [duplicate write-error lines omitted]
00:22:31.686 [2024-11-20 16:34:17.211306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:31.686 [duplicate write-error lines omitted]
00:22:31.686 [2024-11-20 16:34:17.212151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:31.686 [duplicate write-error lines omitted]
00:22:31.686 [2024-11-20 16:34:17.213093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:31.687 [duplicate write-error lines omitted]
00:22:31.687 [2024-11-20 16:34:17.214748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No 
such device or address) on qpair id 1 00:22:31.687 NVMe io qpair process completion error 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed 
with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 [2024-11-20 16:34:17.215884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 
starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.687 starting I/O failed: -6 00:22:31.687 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 [2024-11-20 16:34:17.216703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:31.688 starting I/O failed: -6 00:22:31.688 starting I/O failed: -6 00:22:31.688 starting I/O failed: -6 00:22:31.688 starting I/O failed: -6 00:22:31.688 starting I/O failed: -6 00:22:31.688 starting I/O failed: -6 00:22:31.688 starting I/O failed: -6 
00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with 
error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 [2024-11-20 16:34:17.218052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 
starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 
00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, 
sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.688 Write completed with error (sct=0, sc=8) 00:22:31.688 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 starting I/O failed: -6 00:22:31.689 [2024-11-20 16:34:17.220297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:31.689 NVMe io qpair process completion error 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 
Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 Write completed with error (sct=0, sc=8) 00:22:31.689 
Write completed with error (sct=0, sc=8) 00:22:31.689 Initializing NVMe Controllers 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:31.689 Controller IO queue size 128, less than required. 
00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:31.689 Controller IO queue size 128, less than required. 00:22:31.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:22:31.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:22:31.689 Initialization complete. 
Launching workers. 00:22:31.689 ======================================================== 00:22:31.689 Latency(us) 00:22:31.689 Device Information : IOPS MiB/s Average min max 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1870.16 80.36 68458.15 887.16 117362.57 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1869.31 80.32 68507.41 694.01 144754.22 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1919.24 82.47 66747.13 841.38 117151.91 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1909.72 82.06 67119.39 856.88 144938.75 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1921.99 82.59 66714.57 617.07 116652.99 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1924.53 82.69 66697.30 853.53 128849.78 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1875.87 80.60 68449.90 848.60 119641.36 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1919.88 82.49 66915.57 644.98 117117.48 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1924.11 82.68 66792.96 713.22 118008.56 00:22:31.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1849.21 79.46 69050.30 570.36 117829.37 00:22:31.689 ======================================================== 00:22:31.689 Total : 18984.03 815.72 67532.40 570.36 144938.75 00:22:31.689 00:22:31.689 [2024-11-20 16:34:17.227721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21506c0 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.227768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21516b0 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.227798] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151050 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.227827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2150060 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.227855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2152360 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.227884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2150390 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.227912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21509f0 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.227941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2151380 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.227969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21519e0 is same with the state(6) to be set 00:22:31.689 [2024-11-20 16:34:17.228004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2152540 is same with the state(6) to be set 00:22:31.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:31.689 16:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2274063 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2274063 00:22:32.631 16:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2274063 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:32.631 16:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.631 rmmod nvme_tcp 00:22:32.631 rmmod nvme_fabrics 00:22:32.631 rmmod nvme_keyring 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2273682 ']' 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2273682 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2273682 ']' 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2273682 00:22:32.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2273682) - No such process 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2273682 is not 
found' 00:22:32.631 Process with pid 2273682 is not found 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.631 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.175 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.175 00:22:35.175 real 0m10.265s 00:22:35.175 user 0m28.137s 00:22:35.175 sys 0m3.864s 00:22:35.175 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:22:35.175 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:35.175 ************************************ 00:22:35.175 END TEST nvmf_shutdown_tc4 00:22:35.176 ************************************ 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:35.176 00:22:35.176 real 0m42.766s 00:22:35.176 user 1m44.060s 00:22:35.176 sys 0m13.289s 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:35.176 ************************************ 00:22:35.176 END TEST nvmf_shutdown 00:22:35.176 ************************************ 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:35.176 ************************************ 00:22:35.176 START TEST nvmf_nsid 00:22:35.176 ************************************ 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:35.176 * Looking for test storage... 
00:22:35.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.176 
16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.176 --rc genhtml_branch_coverage=1 00:22:35.176 --rc genhtml_function_coverage=1 00:22:35.176 --rc genhtml_legend=1 00:22:35.176 --rc geninfo_all_blocks=1 00:22:35.176 --rc 
geninfo_unexecuted_blocks=1 00:22:35.176 00:22:35.176 ' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.176 --rc genhtml_branch_coverage=1 00:22:35.176 --rc genhtml_function_coverage=1 00:22:35.176 --rc genhtml_legend=1 00:22:35.176 --rc geninfo_all_blocks=1 00:22:35.176 --rc geninfo_unexecuted_blocks=1 00:22:35.176 00:22:35.176 ' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.176 --rc genhtml_branch_coverage=1 00:22:35.176 --rc genhtml_function_coverage=1 00:22:35.176 --rc genhtml_legend=1 00:22:35.176 --rc geninfo_all_blocks=1 00:22:35.176 --rc geninfo_unexecuted_blocks=1 00:22:35.176 00:22:35.176 ' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.176 --rc genhtml_branch_coverage=1 00:22:35.176 --rc genhtml_function_coverage=1 00:22:35.176 --rc genhtml_legend=1 00:22:35.176 --rc geninfo_all_blocks=1 00:22:35.176 --rc geninfo_unexecuted_blocks=1 00:22:35.176 00:22:35.176 ' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.176 16:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.176 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.177 16:34:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.437 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:43.438 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:43.438 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:43.438 Found net devices under 0000:31:00.0: cvl_0_0 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:43.438 Found net devices under 0000:31:00.1: cvl_0_1 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.438 16:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.438 16:34:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:43.438 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:43.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:22:43.438 00:22:43.438 --- 10.0.0.2 ping statistics --- 00:22:43.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.438 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:22:43.438 00:22:43.438 --- 10.0.0.1 ping statistics --- 00:22:43.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.438 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:43.438 16:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2279447 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2279447 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2279447 ']' 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.438 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.438 [2024-11-20 16:34:28.283923] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:22:43.438 [2024-11-20 16:34:28.283999] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.438 [2024-11-20 16:34:28.367793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.438 [2024-11-20 16:34:28.408372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.438 [2024-11-20 16:34:28.408403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.438 [2024-11-20 16:34:28.408415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.438 [2024-11-20 16:34:28.408422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.438 [2024-11-20 16:34:28.408428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.438 [2024-11-20 16:34:28.409016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.438 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.438 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2279665 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.439 
16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=53830637-6d6c-44d8-bb25-8c3d4a64a6d3 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=8ba14e6a-74da-4cc7-aaed-b166a1921d12 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e0c757fb-a303-4fcc-accf-abd1b856b7dc 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.439 null0 00:22:43.439 null1 00:22:43.439 null2 00:22:43.439 [2024-11-20 16:34:29.176751] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:22:43.439 [2024-11-20 16:34:29.176800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279665 ] 00:22:43.439 [2024-11-20 16:34:29.179842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.439 [2024-11-20 16:34:29.204049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2279665 /var/tmp/tgt2.sock 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2279665 ']' 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:43.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.439 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:43.439 [2024-11-20 16:34:29.262796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.439 [2024-11-20 16:34:29.298740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:43.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:43.699 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:43.960 [2024-11-20 16:34:29.790923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.960 [2024-11-20 16:34:29.807062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:43.960 nvme0n1 nvme0n2 00:22:43.960 nvme1n1 00:22:43.960 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:43.960 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:43.960 16:34:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:45.346 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 53830637-6d6c-44d8-bb25-8c3d4a64a6d3 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:46.730 16:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=538306376d6c44d8bb258c3d4a64a6d3 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 538306376D6C44D8BB258C3D4A64A6D3 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 538306376D6C44D8BB258C3D4A64A6D3 == \5\3\8\3\0\6\3\7\6\D\6\C\4\4\D\8\B\B\2\5\8\C\3\D\4\A\6\4\A\6\D\3 ]] 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 8ba14e6a-74da-4cc7-aaed-b166a1921d12 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:46.730 
16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8ba14e6a74da4cc7aaedb166a1921d12 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8BA14E6A74DA4CC7AAEDB166A1921D12 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 8BA14E6A74DA4CC7AAEDB166A1921D12 == \8\B\A\1\4\E\6\A\7\4\D\A\4\C\C\7\A\A\E\D\B\1\6\6\A\1\9\2\1\D\1\2 ]] 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e0c757fb-a303-4fcc-accf-abd1b856b7dc 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e0c757fba3034fccaccfabd1b856b7dc 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E0C757FBA3034FCCACCFABD1B856B7DC 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E0C757FBA3034FCCACCFABD1B856B7DC == \E\0\C\7\5\7\F\B\A\3\0\3\4\F\C\C\A\C\C\F\A\B\D\1\B\8\5\6\B\7\D\C ]] 00:22:46.730 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2279665 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2279665 ']' 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2279665 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279665 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279665' 00:22:46.992 killing process with pid 2279665 00:22:46.992 16:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2279665 00:22:46.992 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2279665 00:22:47.253 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:47.253 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.253 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:47.253 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.253 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:47.253 16:34:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.253 rmmod nvme_tcp 00:22:47.253 rmmod nvme_fabrics 00:22:47.253 rmmod nvme_keyring 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2279447 ']' 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2279447 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2279447 ']' 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2279447 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.253 16:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279447 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279447' 00:22:47.253 killing process with pid 2279447 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2279447 00:22:47.253 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2279447 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.513 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.514 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.514 16:34:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.514 16:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.424 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.424 00:22:49.424 real 0m14.637s 00:22:49.424 user 0m11.276s 00:22:49.424 sys 0m6.536s 00:22:49.424 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.424 16:34:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:49.424 ************************************ 00:22:49.424 END TEST nvmf_nsid 00:22:49.424 ************************************ 00:22:49.424 16:34:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:49.424 00:22:49.424 real 12m59.286s 00:22:49.424 user 27m18.840s 00:22:49.424 sys 3m48.169s 00:22:49.424 16:34:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.424 16:34:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:49.424 ************************************ 00:22:49.424 END TEST nvmf_target_extra 00:22:49.424 ************************************ 00:22:49.684 16:34:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:49.684 16:34:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:49.684 16:34:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.684 16:34:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.684 ************************************ 00:22:49.684 START TEST nvmf_host 00:22:49.684 ************************************ 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:49.684 * Looking for test storage... 
00:22:49.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.684 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.945 --rc genhtml_branch_coverage=1 00:22:49.945 --rc genhtml_function_coverage=1 00:22:49.945 --rc genhtml_legend=1 00:22:49.945 --rc geninfo_all_blocks=1 00:22:49.945 --rc geninfo_unexecuted_blocks=1 00:22:49.945 00:22:49.945 ' 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.945 --rc genhtml_branch_coverage=1 00:22:49.945 --rc genhtml_function_coverage=1 00:22:49.945 --rc genhtml_legend=1 00:22:49.945 --rc 
geninfo_all_blocks=1 00:22:49.945 --rc geninfo_unexecuted_blocks=1 00:22:49.945 00:22:49.945 ' 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.945 --rc genhtml_branch_coverage=1 00:22:49.945 --rc genhtml_function_coverage=1 00:22:49.945 --rc genhtml_legend=1 00:22:49.945 --rc geninfo_all_blocks=1 00:22:49.945 --rc geninfo_unexecuted_blocks=1 00:22:49.945 00:22:49.945 ' 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.945 --rc genhtml_branch_coverage=1 00:22:49.945 --rc genhtml_function_coverage=1 00:22:49.945 --rc genhtml_legend=1 00:22:49.945 --rc geninfo_all_blocks=1 00:22:49.945 --rc geninfo_unexecuted_blocks=1 00:22:49.945 00:22:49.945 ' 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.945 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.946 ************************************ 00:22:49.946 START TEST nvmf_multicontroller 00:22:49.946 ************************************ 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:49.946 * Looking for test storage... 
00:22:49.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.946 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:50.206 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:50.206 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.206 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.207 --rc genhtml_branch_coverage=1 00:22:50.207 --rc genhtml_function_coverage=1 
00:22:50.207 --rc genhtml_legend=1 00:22:50.207 --rc geninfo_all_blocks=1 00:22:50.207 --rc geninfo_unexecuted_blocks=1 00:22:50.207 00:22:50.207 ' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.207 --rc genhtml_branch_coverage=1 00:22:50.207 --rc genhtml_function_coverage=1 00:22:50.207 --rc genhtml_legend=1 00:22:50.207 --rc geninfo_all_blocks=1 00:22:50.207 --rc geninfo_unexecuted_blocks=1 00:22:50.207 00:22:50.207 ' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.207 --rc genhtml_branch_coverage=1 00:22:50.207 --rc genhtml_function_coverage=1 00:22:50.207 --rc genhtml_legend=1 00:22:50.207 --rc geninfo_all_blocks=1 00:22:50.207 --rc geninfo_unexecuted_blocks=1 00:22:50.207 00:22:50.207 ' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:50.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.207 --rc genhtml_branch_coverage=1 00:22:50.207 --rc genhtml_function_coverage=1 00:22:50.207 --rc genhtml_legend=1 00:22:50.207 --rc geninfo_all_blocks=1 00:22:50.207 --rc geninfo_unexecuted_blocks=1 00:22:50.207 00:22:50.207 ' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.207 16:34:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.207 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.208 16:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:58.343 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:58.344 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:58.344 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:58.344 16:34:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:58.344 Found net devices under 0000:31:00.0: cvl_0_0 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:58.344 Found net devices under 0000:31:00.1: cvl_0_1 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:58.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:22:58.344 00:22:58.344 --- 10.0.0.2 ping statistics --- 00:22:58.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.344 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:22:58.344 00:22:58.344 --- 10.0.0.1 ping statistics --- 00:22:58.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.344 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2284789 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2284789 00:22:58.344 16:34:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2284789 ']' 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.344 16:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.344 [2024-11-20 16:34:43.451235] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:22:58.344 [2024-11-20 16:34:43.451304] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.344 [2024-11-20 16:34:43.551324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:58.344 [2024-11-20 16:34:43.604092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.344 [2024-11-20 16:34:43.604140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:58.344 [2024-11-20 16:34:43.604149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.344 [2024-11-20 16:34:43.604156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.344 [2024-11-20 16:34:43.604163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.344 [2024-11-20 16:34:43.606107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.345 [2024-11-20 16:34:43.606282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.345 [2024-11-20 16:34:43.606284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.345 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.345 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:58.345 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.345 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.345 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.605 [2024-11-20 16:34:44.311235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.605 Malloc0 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.605 [2024-11-20 
16:34:44.374099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.605 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.606 [2024-11-20 16:34:44.386056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.606 Malloc1 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2284966 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2284966 /var/tmp/bdevperf.sock 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2284966 ']' 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.606 16:34:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.546 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.546 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:59.546 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:59.546 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.546 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.806 NVMe0n1 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.806 1 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:59.806 16:34:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.806 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.806 request: 00:22:59.806 { 00:22:59.806 "name": "NVMe0", 00:22:59.806 "trtype": "tcp", 00:22:59.807 "traddr": "10.0.0.2", 00:22:59.807 "adrfam": "ipv4", 00:22:59.807 "trsvcid": "4420", 00:22:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.807 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:59.807 "hostaddr": "10.0.0.1", 00:22:59.807 "prchk_reftag": false, 00:22:59.807 "prchk_guard": false, 00:22:59.807 "hdgst": false, 00:22:59.807 "ddgst": false, 00:22:59.807 "allow_unrecognized_csi": false, 00:22:59.807 "method": "bdev_nvme_attach_controller", 00:22:59.807 "req_id": 1 00:22:59.807 } 00:22:59.807 Got JSON-RPC error response 00:22:59.807 response: 00:22:59.807 { 00:22:59.807 "code": -114, 00:22:59.807 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:59.807 } 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:59.807 16:34:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.807 request: 00:22:59.807 { 00:22:59.807 "name": "NVMe0", 00:22:59.807 "trtype": "tcp", 00:22:59.807 "traddr": "10.0.0.2", 00:22:59.807 "adrfam": "ipv4", 00:22:59.807 "trsvcid": "4420", 00:22:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:59.807 "hostaddr": "10.0.0.1", 00:22:59.807 "prchk_reftag": false, 00:22:59.807 "prchk_guard": false, 00:22:59.807 "hdgst": false, 00:22:59.807 "ddgst": false, 00:22:59.807 "allow_unrecognized_csi": false, 00:22:59.807 "method": "bdev_nvme_attach_controller", 00:22:59.807 "req_id": 1 00:22:59.807 } 00:22:59.807 Got JSON-RPC error response 00:22:59.807 response: 00:22:59.807 { 00:22:59.807 "code": -114, 00:22:59.807 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:59.807 } 00:22:59.807 16:34:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.807 request: 00:22:59.807 { 00:22:59.807 "name": "NVMe0", 00:22:59.807 "trtype": "tcp", 00:22:59.807 "traddr": "10.0.0.2", 00:22:59.807 "adrfam": "ipv4", 00:22:59.807 "trsvcid": "4420", 00:22:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.807 "hostaddr": "10.0.0.1", 00:22:59.807 "prchk_reftag": false, 00:22:59.807 "prchk_guard": false, 00:22:59.807 "hdgst": false, 00:22:59.807 "ddgst": false, 00:22:59.807 "multipath": "disable", 00:22:59.807 "allow_unrecognized_csi": false, 00:22:59.807 "method": "bdev_nvme_attach_controller", 00:22:59.807 "req_id": 1 00:22:59.807 } 00:22:59.807 Got JSON-RPC error response 00:22:59.807 response: 00:22:59.807 { 00:22:59.807 "code": -114, 00:22:59.807 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:59.807 } 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.807 request: 00:22:59.807 { 00:22:59.807 "name": "NVMe0", 00:22:59.807 "trtype": "tcp", 00:22:59.807 "traddr": "10.0.0.2", 00:22:59.807 "adrfam": "ipv4", 00:22:59.807 "trsvcid": "4420", 00:22:59.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.807 "hostaddr": "10.0.0.1", 00:22:59.807 "prchk_reftag": false, 00:22:59.807 "prchk_guard": false, 00:22:59.807 "hdgst": false, 00:22:59.807 "ddgst": false, 00:22:59.807 "multipath": "failover", 00:22:59.807 "allow_unrecognized_csi": false, 00:22:59.807 "method": "bdev_nvme_attach_controller", 00:22:59.807 "req_id": 1 00:22:59.807 } 00:22:59.807 Got JSON-RPC error response 00:22:59.807 response: 00:22:59.807 { 00:22:59.807 "code": -114, 00:22:59.807 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:59.807 } 00:22:59.807 16:34:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.807 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.069 NVMe0n1 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.069 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:00.069 16:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.451 { 00:23:01.451 "results": [ 00:23:01.451 { 00:23:01.451 "job": "NVMe0n1", 00:23:01.451 "core_mask": "0x1", 00:23:01.451 "workload": "write", 00:23:01.451 "status": "finished", 00:23:01.451 "queue_depth": 128, 00:23:01.452 "io_size": 4096, 00:23:01.452 "runtime": 1.006028, 00:23:01.452 "iops": 26443.597991308394, 00:23:01.452 "mibps": 103.29530465354841, 00:23:01.452 "io_failed": 0, 00:23:01.452 "io_timeout": 0, 00:23:01.452 "avg_latency_us": 4828.748476988811, 00:23:01.452 "min_latency_us": 2116.266666666667, 00:23:01.452 "max_latency_us": 13271.04 00:23:01.452 } 00:23:01.452 ], 00:23:01.452 "core_count": 1 00:23:01.452 } 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2284966 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2284966 ']' 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2284966 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2284966 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2284966' 00:23:01.452 killing process with pid 2284966 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2284966 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2284966 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:01.452 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:01.452 [2024-11-20 16:34:44.508355] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:23:01.452 [2024-11-20 16:34:44.508414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284966 ] 00:23:01.452 [2024-11-20 16:34:44.579833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.452 [2024-11-20 16:34:44.616106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.452 [2024-11-20 16:34:45.964378] bdev.c:4906:bdev_name_add: *ERROR*: Bdev name bca57c17-66f6-4748-9130-15a7d35c25e9 already exists 00:23:01.452 [2024-11-20 16:34:45.964407] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:bca57c17-66f6-4748-9130-15a7d35c25e9 alias for bdev NVMe1n1 00:23:01.452 [2024-11-20 16:34:45.964416] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:01.452 Running I/O for 1 seconds... 00:23:01.452 26410.00 IOPS, 103.16 MiB/s 00:23:01.452 Latency(us) 00:23:01.452 [2024-11-20T15:34:47.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.452 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:01.452 NVMe0n1 : 1.01 26443.60 103.30 0.00 0.00 4828.75 2116.27 13271.04 00:23:01.452 [2024-11-20T15:34:47.411Z] =================================================================================================================== 00:23:01.452 [2024-11-20T15:34:47.411Z] Total : 26443.60 103.30 0.00 0.00 4828.75 2116.27 13271.04 00:23:01.452 Received shutdown signal, test time was about 1.000000 seconds 00:23:01.452 00:23:01.452 Latency(us) 00:23:01.452 [2024-11-20T15:34:47.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.452 [2024-11-20T15:34:47.411Z] =================================================================================================================== 00:23:01.452 [2024-11-20T15:34:47.411Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:23:01.452 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.452 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.452 rmmod nvme_tcp 00:23:01.452 rmmod nvme_fabrics 00:23:01.452 rmmod nvme_keyring 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2284789 ']' 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2284789 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2284789 ']' 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2284789 
00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2284789 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2284789' 00:23:01.712 killing process with pid 2284789 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2284789 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2284789 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.712 16:34:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.254 00:23:04.254 real 0m13.975s 00:23:04.254 user 0m17.464s 00:23:04.254 sys 0m6.334s 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:04.254 ************************************ 00:23:04.254 END TEST nvmf_multicontroller 00:23:04.254 ************************************ 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.254 ************************************ 00:23:04.254 START TEST nvmf_aer 00:23:04.254 ************************************ 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:04.254 * Looking for test storage... 
00:23:04.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.254 --rc genhtml_branch_coverage=1 00:23:04.254 --rc genhtml_function_coverage=1 00:23:04.254 --rc genhtml_legend=1 00:23:04.254 --rc geninfo_all_blocks=1 00:23:04.254 --rc geninfo_unexecuted_blocks=1 00:23:04.254 00:23:04.254 ' 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.254 --rc 
genhtml_branch_coverage=1 00:23:04.254 --rc genhtml_function_coverage=1 00:23:04.254 --rc genhtml_legend=1 00:23:04.254 --rc geninfo_all_blocks=1 00:23:04.254 --rc geninfo_unexecuted_blocks=1 00:23:04.254 00:23:04.254 ' 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.254 --rc genhtml_branch_coverage=1 00:23:04.254 --rc genhtml_function_coverage=1 00:23:04.254 --rc genhtml_legend=1 00:23:04.254 --rc geninfo_all_blocks=1 00:23:04.254 --rc geninfo_unexecuted_blocks=1 00:23:04.254 00:23:04.254 ' 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:04.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.254 --rc genhtml_branch_coverage=1 00:23:04.254 --rc genhtml_function_coverage=1 00:23:04.254 --rc genhtml_legend=1 00:23:04.254 --rc geninfo_all_blocks=1 00:23:04.254 --rc geninfo_unexecuted_blocks=1 00:23:04.254 00:23:04.254 ' 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.254 16:34:49 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.254 16:34:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:04.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:04.254 16:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:12.393 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:12.393 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.393 16:34:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:12.393 Found net devices under 0000:31:00.0: cvl_0_0 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:12.393 Found net devices under 0000:31:00.1: cvl_0_1 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.393 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:12.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:23:12.394 00:23:12.394 --- 10.0.0.2 ping statistics --- 00:23:12.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.394 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:23:12.394 00:23:12.394 --- 10.0.0.1 ping statistics --- 00:23:12.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.394 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2289793 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2289793 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2289793 ']' 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.394 16:34:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.394 [2024-11-20 16:34:57.515791] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:23:12.394 [2024-11-20 16:34:57.515859] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.394 [2024-11-20 16:34:57.600006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.394 [2024-11-20 16:34:57.643048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:12.394 [2024-11-20 16:34:57.643089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.394 [2024-11-20 16:34:57.643097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.394 [2024-11-20 16:34:57.643104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.394 [2024-11-20 16:34:57.643111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.394 [2024-11-20 16:34:57.644721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.394 [2024-11-20 16:34:57.644836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.394 [2024-11-20 16:34:57.645007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.394 [2024-11-20 16:34:57.645008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.394 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.394 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:12.394 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.394 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.394 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.655 [2024-11-20 16:34:58.372978] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.655 Malloc0 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.655 [2024-11-20 16:34:58.439336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.655 [ 00:23:12.655 { 00:23:12.655 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:12.655 "subtype": "Discovery", 00:23:12.655 "listen_addresses": [], 00:23:12.655 "allow_any_host": true, 00:23:12.655 "hosts": [] 00:23:12.655 }, 00:23:12.655 { 00:23:12.655 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.655 "subtype": "NVMe", 00:23:12.655 "listen_addresses": [ 00:23:12.655 { 00:23:12.655 "trtype": "TCP", 00:23:12.655 "adrfam": "IPv4", 00:23:12.655 "traddr": "10.0.0.2", 00:23:12.655 "trsvcid": "4420" 00:23:12.655 } 00:23:12.655 ], 00:23:12.655 "allow_any_host": true, 00:23:12.655 "hosts": [], 00:23:12.655 "serial_number": "SPDK00000000000001", 00:23:12.655 "model_number": "SPDK bdev Controller", 00:23:12.655 "max_namespaces": 2, 00:23:12.655 "min_cntlid": 1, 00:23:12.655 "max_cntlid": 65519, 00:23:12.655 "namespaces": [ 00:23:12.655 { 00:23:12.655 "nsid": 1, 00:23:12.655 "bdev_name": "Malloc0", 00:23:12.655 "name": "Malloc0", 00:23:12.655 "nguid": "F8987B775AAB4E119DB2D4EFECAB139F", 00:23:12.655 "uuid": "f8987b77-5aab-4e11-9db2-d4efecab139f" 00:23:12.655 } 00:23:12.655 ] 00:23:12.655 } 00:23:12.655 ] 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2290031 00:23:12.655 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:12.656 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.916 Malloc1 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.916 Asynchronous Event Request test 00:23:12.916 Attaching to 10.0.0.2 00:23:12.916 Attached to 10.0.0.2 00:23:12.916 Registering asynchronous event callbacks... 00:23:12.916 Starting namespace attribute notice tests for all controllers... 00:23:12.916 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:12.916 aer_cb - Changed Namespace 00:23:12.916 Cleaning up... 
00:23:12.916 [ 00:23:12.916 { 00:23:12.916 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:12.916 "subtype": "Discovery", 00:23:12.916 "listen_addresses": [], 00:23:12.916 "allow_any_host": true, 00:23:12.916 "hosts": [] 00:23:12.916 }, 00:23:12.916 { 00:23:12.916 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.916 "subtype": "NVMe", 00:23:12.916 "listen_addresses": [ 00:23:12.916 { 00:23:12.916 "trtype": "TCP", 00:23:12.916 "adrfam": "IPv4", 00:23:12.916 "traddr": "10.0.0.2", 00:23:12.916 "trsvcid": "4420" 00:23:12.916 } 00:23:12.916 ], 00:23:12.916 "allow_any_host": true, 00:23:12.916 "hosts": [], 00:23:12.916 "serial_number": "SPDK00000000000001", 00:23:12.916 "model_number": "SPDK bdev Controller", 00:23:12.916 "max_namespaces": 2, 00:23:12.916 "min_cntlid": 1, 00:23:12.916 "max_cntlid": 65519, 00:23:12.916 "namespaces": [ 00:23:12.916 { 00:23:12.916 "nsid": 1, 00:23:12.916 "bdev_name": "Malloc0", 00:23:12.916 "name": "Malloc0", 00:23:12.916 "nguid": "F8987B775AAB4E119DB2D4EFECAB139F", 00:23:12.916 "uuid": "f8987b77-5aab-4e11-9db2-d4efecab139f" 00:23:12.916 }, 00:23:12.916 { 00:23:12.916 "nsid": 2, 00:23:12.916 "bdev_name": "Malloc1", 00:23:12.916 "name": "Malloc1", 00:23:12.916 "nguid": "BCED233207304635AD91DE4A61828112", 00:23:12.916 "uuid": "bced2332-0730-4635-ad91-de4a61828112" 00:23:12.916 } 00:23:12.916 ] 00:23:12.916 } 00:23:12.916 ] 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2290031 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.916 16:34:58 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.916 rmmod nvme_tcp 00:23:12.916 rmmod nvme_fabrics 00:23:12.916 rmmod nvme_keyring 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2289793 ']' 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2289793 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2289793 ']' 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2289793 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.916 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2289793 00:23:13.176 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.176 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.176 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2289793' 00:23:13.176 killing process with pid 2289793 00:23:13.176 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2289793 00:23:13.176 16:34:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2289793 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.176 16:34:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.722 00:23:15.722 real 0m11.346s 00:23:15.722 user 0m7.946s 00:23:15.722 sys 0m5.982s 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:15.722 ************************************ 00:23:15.722 END TEST nvmf_aer 00:23:15.722 ************************************ 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.722 ************************************ 00:23:15.722 START TEST nvmf_async_init 00:23:15.722 ************************************ 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:15.722 * Looking for test storage... 
00:23:15.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.722 16:35:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:15.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.722 --rc genhtml_branch_coverage=1 00:23:15.722 --rc genhtml_function_coverage=1 00:23:15.722 --rc genhtml_legend=1 00:23:15.722 --rc geninfo_all_blocks=1 00:23:15.722 --rc geninfo_unexecuted_blocks=1 00:23:15.722 
00:23:15.722 ' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:15.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.722 --rc genhtml_branch_coverage=1 00:23:15.722 --rc genhtml_function_coverage=1 00:23:15.722 --rc genhtml_legend=1 00:23:15.722 --rc geninfo_all_blocks=1 00:23:15.722 --rc geninfo_unexecuted_blocks=1 00:23:15.722 00:23:15.722 ' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:15.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.722 --rc genhtml_branch_coverage=1 00:23:15.722 --rc genhtml_function_coverage=1 00:23:15.722 --rc genhtml_legend=1 00:23:15.722 --rc geninfo_all_blocks=1 00:23:15.722 --rc geninfo_unexecuted_blocks=1 00:23:15.722 00:23:15.722 ' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:15.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.722 --rc genhtml_branch_coverage=1 00:23:15.722 --rc genhtml_function_coverage=1 00:23:15.722 --rc genhtml_legend=1 00:23:15.722 --rc geninfo_all_blocks=1 00:23:15.722 --rc geninfo_unexecuted_blocks=1 00:23:15.722 00:23:15.722 ' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.722 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7d6895970cf744d8a429aa47d06f7987 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.723 16:35:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.862 16:35:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:23.862 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:23.862 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:23.862 Found net devices under 0000:31:00.0: cvl_0_0 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:23.862 Found net devices under 0000:31:00.1: cvl_0_1 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.862 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:23.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:23:23.863 00:23:23.863 --- 10.0.0.2 ping statistics --- 00:23:23.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.863 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:23:23.863 00:23:23.863 --- 10.0.0.1 ping statistics --- 00:23:23.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.863 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2294389 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2294389 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2294389 ']' 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.863 16:35:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 [2024-11-20 16:35:08.895658] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:23:23.863 [2024-11-20 16:35:08.895709] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.863 [2024-11-20 16:35:08.974102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.863 [2024-11-20 16:35:09.008885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.863 [2024-11-20 16:35:09.008915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.863 [2024-11-20 16:35:09.008923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.863 [2024-11-20 16:35:09.008930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.863 [2024-11-20 16:35:09.008935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.863 [2024-11-20 16:35:09.009488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 [2024-11-20 16:35:09.738808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 null0 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7d6895970cf744d8a429aa47d06f7987 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:23.863 [2024-11-20 16:35:09.799112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.863 16:35:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.125 nvme0n1 00:23:24.125 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.125 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:24.125 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.125 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.125 [ 00:23:24.125 { 00:23:24.125 "name": "nvme0n1", 00:23:24.125 "aliases": [ 00:23:24.125 "7d689597-0cf7-44d8-a429-aa47d06f7987" 00:23:24.125 ], 00:23:24.125 "product_name": "NVMe disk", 00:23:24.125 "block_size": 512, 00:23:24.125 "num_blocks": 2097152, 00:23:24.125 "uuid": "7d689597-0cf7-44d8-a429-aa47d06f7987", 00:23:24.125 "numa_id": 0, 00:23:24.125 "assigned_rate_limits": { 00:23:24.125 "rw_ios_per_sec": 0, 00:23:24.125 "rw_mbytes_per_sec": 0, 00:23:24.125 "r_mbytes_per_sec": 0, 00:23:24.125 "w_mbytes_per_sec": 0 00:23:24.125 }, 00:23:24.125 "claimed": false, 00:23:24.125 "zoned": false, 00:23:24.125 "supported_io_types": { 00:23:24.125 "read": true, 00:23:24.125 "write": true, 00:23:24.125 "unmap": false, 00:23:24.125 "flush": true, 00:23:24.125 "reset": true, 00:23:24.125 "nvme_admin": true, 00:23:24.125 "nvme_io": true, 00:23:24.125 "nvme_io_md": false, 00:23:24.125 "write_zeroes": true, 00:23:24.125 "zcopy": false, 00:23:24.125 "get_zone_info": false, 00:23:24.125 "zone_management": false, 00:23:24.125 "zone_append": false, 00:23:24.125 "compare": true, 00:23:24.125 "compare_and_write": true, 00:23:24.125 "abort": true, 00:23:24.125 "seek_hole": false, 00:23:24.125 "seek_data": false, 00:23:24.125 "copy": true, 00:23:24.125 
"nvme_iov_md": false 00:23:24.125 }, 00:23:24.125 "memory_domains": [ 00:23:24.125 { 00:23:24.125 "dma_device_id": "system", 00:23:24.125 "dma_device_type": 1 00:23:24.125 } 00:23:24.125 ], 00:23:24.125 "driver_specific": { 00:23:24.125 "nvme": [ 00:23:24.125 { 00:23:24.125 "trid": { 00:23:24.125 "trtype": "TCP", 00:23:24.125 "adrfam": "IPv4", 00:23:24.125 "traddr": "10.0.0.2", 00:23:24.125 "trsvcid": "4420", 00:23:24.125 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:24.125 }, 00:23:24.125 "ctrlr_data": { 00:23:24.125 "cntlid": 1, 00:23:24.125 "vendor_id": "0x8086", 00:23:24.125 "model_number": "SPDK bdev Controller", 00:23:24.125 "serial_number": "00000000000000000000", 00:23:24.125 "firmware_revision": "25.01", 00:23:24.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.125 "oacs": { 00:23:24.125 "security": 0, 00:23:24.125 "format": 0, 00:23:24.125 "firmware": 0, 00:23:24.125 "ns_manage": 0 00:23:24.125 }, 00:23:24.125 "multi_ctrlr": true, 00:23:24.125 "ana_reporting": false 00:23:24.125 }, 00:23:24.125 "vs": { 00:23:24.125 "nvme_version": "1.3" 00:23:24.125 }, 00:23:24.125 "ns_data": { 00:23:24.125 "id": 1, 00:23:24.125 "can_share": true 00:23:24.125 } 00:23:24.125 } 00:23:24.125 ], 00:23:24.125 "mp_policy": "active_passive" 00:23:24.125 } 00:23:24.125 } 00:23:24.125 ] 00:23:24.125 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.125 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:24.125 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.125 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.125 [2024-11-20 16:35:10.073397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:24.125 [2024-11-20 16:35:10.073463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1be9090 (9): Bad file descriptor 00:23:24.387 [2024-11-20 16:35:10.205079] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.387 [ 00:23:24.387 { 00:23:24.387 "name": "nvme0n1", 00:23:24.387 "aliases": [ 00:23:24.387 "7d689597-0cf7-44d8-a429-aa47d06f7987" 00:23:24.387 ], 00:23:24.387 "product_name": "NVMe disk", 00:23:24.387 "block_size": 512, 00:23:24.387 "num_blocks": 2097152, 00:23:24.387 "uuid": "7d689597-0cf7-44d8-a429-aa47d06f7987", 00:23:24.387 "numa_id": 0, 00:23:24.387 "assigned_rate_limits": { 00:23:24.387 "rw_ios_per_sec": 0, 00:23:24.387 "rw_mbytes_per_sec": 0, 00:23:24.387 "r_mbytes_per_sec": 0, 00:23:24.387 "w_mbytes_per_sec": 0 00:23:24.387 }, 00:23:24.387 "claimed": false, 00:23:24.387 "zoned": false, 00:23:24.387 "supported_io_types": { 00:23:24.387 "read": true, 00:23:24.387 "write": true, 00:23:24.387 "unmap": false, 00:23:24.387 "flush": true, 00:23:24.387 "reset": true, 00:23:24.387 "nvme_admin": true, 00:23:24.387 "nvme_io": true, 00:23:24.387 "nvme_io_md": false, 00:23:24.387 "write_zeroes": true, 00:23:24.387 "zcopy": false, 00:23:24.387 "get_zone_info": false, 00:23:24.387 "zone_management": false, 00:23:24.387 "zone_append": false, 00:23:24.387 "compare": true, 00:23:24.387 "compare_and_write": true, 00:23:24.387 "abort": true, 00:23:24.387 "seek_hole": false, 00:23:24.387 "seek_data": false, 00:23:24.387 "copy": true, 00:23:24.387 "nvme_iov_md": false 00:23:24.387 }, 00:23:24.387 "memory_domains": [ 
00:23:24.387 { 00:23:24.387 "dma_device_id": "system", 00:23:24.387 "dma_device_type": 1 00:23:24.387 } 00:23:24.387 ], 00:23:24.387 "driver_specific": { 00:23:24.387 "nvme": [ 00:23:24.387 { 00:23:24.387 "trid": { 00:23:24.387 "trtype": "TCP", 00:23:24.387 "adrfam": "IPv4", 00:23:24.387 "traddr": "10.0.0.2", 00:23:24.387 "trsvcid": "4420", 00:23:24.387 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:24.387 }, 00:23:24.387 "ctrlr_data": { 00:23:24.387 "cntlid": 2, 00:23:24.387 "vendor_id": "0x8086", 00:23:24.387 "model_number": "SPDK bdev Controller", 00:23:24.387 "serial_number": "00000000000000000000", 00:23:24.387 "firmware_revision": "25.01", 00:23:24.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.387 "oacs": { 00:23:24.387 "security": 0, 00:23:24.387 "format": 0, 00:23:24.387 "firmware": 0, 00:23:24.387 "ns_manage": 0 00:23:24.387 }, 00:23:24.387 "multi_ctrlr": true, 00:23:24.387 "ana_reporting": false 00:23:24.387 }, 00:23:24.387 "vs": { 00:23:24.387 "nvme_version": "1.3" 00:23:24.387 }, 00:23:24.387 "ns_data": { 00:23:24.387 "id": 1, 00:23:24.387 "can_share": true 00:23:24.387 } 00:23:24.387 } 00:23:24.387 ], 00:23:24.387 "mp_policy": "active_passive" 00:23:24.387 } 00:23:24.387 } 00:23:24.387 ] 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0D1UWtFthN 
00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0D1UWtFthN 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.0D1UWtFthN 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.387 [2024-11-20 16:35:10.294082] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.387 [2024-11-20 16:35:10.294210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.387 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.388 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.388 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.388 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.388 [2024-11-20 16:35:10.318161] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:24.648 nvme0n1 00:23:24.648 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.648 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:24.648 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.648 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.648 [ 00:23:24.648 { 00:23:24.648 "name": "nvme0n1", 00:23:24.648 "aliases": [ 00:23:24.648 "7d689597-0cf7-44d8-a429-aa47d06f7987" 00:23:24.648 ], 00:23:24.648 "product_name": "NVMe disk", 00:23:24.649 "block_size": 512, 00:23:24.649 "num_blocks": 2097152, 00:23:24.649 "uuid": "7d689597-0cf7-44d8-a429-aa47d06f7987", 00:23:24.649 "numa_id": 0, 00:23:24.649 "assigned_rate_limits": { 00:23:24.649 "rw_ios_per_sec": 0, 00:23:24.649 
"rw_mbytes_per_sec": 0, 00:23:24.649 "r_mbytes_per_sec": 0, 00:23:24.649 "w_mbytes_per_sec": 0 00:23:24.649 }, 00:23:24.649 "claimed": false, 00:23:24.649 "zoned": false, 00:23:24.649 "supported_io_types": { 00:23:24.649 "read": true, 00:23:24.649 "write": true, 00:23:24.649 "unmap": false, 00:23:24.649 "flush": true, 00:23:24.649 "reset": true, 00:23:24.649 "nvme_admin": true, 00:23:24.649 "nvme_io": true, 00:23:24.649 "nvme_io_md": false, 00:23:24.649 "write_zeroes": true, 00:23:24.649 "zcopy": false, 00:23:24.649 "get_zone_info": false, 00:23:24.649 "zone_management": false, 00:23:24.649 "zone_append": false, 00:23:24.649 "compare": true, 00:23:24.649 "compare_and_write": true, 00:23:24.649 "abort": true, 00:23:24.649 "seek_hole": false, 00:23:24.649 "seek_data": false, 00:23:24.649 "copy": true, 00:23:24.649 "nvme_iov_md": false 00:23:24.649 }, 00:23:24.649 "memory_domains": [ 00:23:24.649 { 00:23:24.649 "dma_device_id": "system", 00:23:24.649 "dma_device_type": 1 00:23:24.649 } 00:23:24.649 ], 00:23:24.649 "driver_specific": { 00:23:24.649 "nvme": [ 00:23:24.649 { 00:23:24.649 "trid": { 00:23:24.649 "trtype": "TCP", 00:23:24.649 "adrfam": "IPv4", 00:23:24.649 "traddr": "10.0.0.2", 00:23:24.649 "trsvcid": "4421", 00:23:24.649 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:24.649 }, 00:23:24.649 "ctrlr_data": { 00:23:24.649 "cntlid": 3, 00:23:24.649 "vendor_id": "0x8086", 00:23:24.649 "model_number": "SPDK bdev Controller", 00:23:24.649 "serial_number": "00000000000000000000", 00:23:24.649 "firmware_revision": "25.01", 00:23:24.649 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.649 "oacs": { 00:23:24.649 "security": 0, 00:23:24.649 "format": 0, 00:23:24.649 "firmware": 0, 00:23:24.649 "ns_manage": 0 00:23:24.649 }, 00:23:24.649 "multi_ctrlr": true, 00:23:24.649 "ana_reporting": false 00:23:24.649 }, 00:23:24.649 "vs": { 00:23:24.649 "nvme_version": "1.3" 00:23:24.649 }, 00:23:24.649 "ns_data": { 00:23:24.649 "id": 1, 00:23:24.649 "can_share": true 00:23:24.649 } 
00:23:24.649 } 00:23:24.649 ], 00:23:24.649 "mp_policy": "active_passive" 00:23:24.649 } 00:23:24.649 } 00:23:24.649 ] 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.0D1UWtFthN 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.649 rmmod nvme_tcp 00:23:24.649 rmmod nvme_fabrics 00:23:24.649 rmmod nvme_keyring 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:24.649 16:35:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2294389 ']' 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2294389 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2294389 ']' 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2294389 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2294389 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2294389' 00:23:24.649 killing process with pid 2294389 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2294389 00:23:24.649 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2294389 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.910 
16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.910 16:35:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.823 16:35:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.823 00:23:26.823 real 0m11.561s 00:23:26.823 user 0m4.122s 00:23:26.823 sys 0m5.967s 00:23:26.823 16:35:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.823 16:35:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:26.823 ************************************ 00:23:26.823 END TEST nvmf_async_init 00:23:26.823 ************************************ 00:23:27.083 16:35:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:27.083 16:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:27.083 16:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.083 16:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.083 ************************************ 00:23:27.083 START TEST dma 00:23:27.083 ************************************ 00:23:27.083 16:35:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:27.083 * Looking for test storage... 00:23:27.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.083 16:35:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:27.083 16:35:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:27.083 16:35:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:27.083 16:35:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:27.083 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.083 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.083 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.083 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.344 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:27.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.345 --rc genhtml_branch_coverage=1 00:23:27.345 --rc genhtml_function_coverage=1 00:23:27.345 --rc genhtml_legend=1 00:23:27.345 --rc geninfo_all_blocks=1 00:23:27.345 --rc geninfo_unexecuted_blocks=1 00:23:27.345 00:23:27.345 ' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:27.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.345 --rc genhtml_branch_coverage=1 00:23:27.345 --rc genhtml_function_coverage=1 
00:23:27.345 --rc genhtml_legend=1 00:23:27.345 --rc geninfo_all_blocks=1 00:23:27.345 --rc geninfo_unexecuted_blocks=1 00:23:27.345 00:23:27.345 ' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:27.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.345 --rc genhtml_branch_coverage=1 00:23:27.345 --rc genhtml_function_coverage=1 00:23:27.345 --rc genhtml_legend=1 00:23:27.345 --rc geninfo_all_blocks=1 00:23:27.345 --rc geninfo_unexecuted_blocks=1 00:23:27.345 00:23:27.345 ' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:27.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.345 --rc genhtml_branch_coverage=1 00:23:27.345 --rc genhtml_function_coverage=1 00:23:27.345 --rc genhtml_legend=1 00:23:27.345 --rc geninfo_all_blocks=1 00:23:27.345 --rc geninfo_unexecuted_blocks=1 00:23:27.345 00:23:27.345 ' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:27.345 
16:35:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:27.345 00:23:27.345 real 0m0.234s 00:23:27.345 user 0m0.137s 00:23:27.345 sys 0m0.112s 00:23:27.345 16:35:13 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:27.345 ************************************ 00:23:27.345 END TEST dma 00:23:27.345 ************************************ 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.345 ************************************ 00:23:27.345 START TEST nvmf_identify 00:23:27.345 ************************************ 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:27.345 * Looking for test storage... 
00:23:27.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:27.345 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.607 --rc genhtml_branch_coverage=1 00:23:27.607 --rc genhtml_function_coverage=1 00:23:27.607 --rc genhtml_legend=1 00:23:27.607 --rc geninfo_all_blocks=1 00:23:27.607 --rc geninfo_unexecuted_blocks=1 00:23:27.607 00:23:27.607 ' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:23:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.607 --rc genhtml_branch_coverage=1 00:23:27.607 --rc genhtml_function_coverage=1 00:23:27.607 --rc genhtml_legend=1 00:23:27.607 --rc geninfo_all_blocks=1 00:23:27.607 --rc geninfo_unexecuted_blocks=1 00:23:27.607 00:23:27.607 ' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.607 --rc genhtml_branch_coverage=1 00:23:27.607 --rc genhtml_function_coverage=1 00:23:27.607 --rc genhtml_legend=1 00:23:27.607 --rc geninfo_all_blocks=1 00:23:27.607 --rc geninfo_unexecuted_blocks=1 00:23:27.607 00:23:27.607 ' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:27.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.607 --rc genhtml_branch_coverage=1 00:23:27.607 --rc genhtml_function_coverage=1 00:23:27.607 --rc genhtml_legend=1 00:23:27.607 --rc geninfo_all_blocks=1 00:23:27.607 --rc geninfo_unexecuted_blocks=1 00:23:27.607 00:23:27.607 ' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.607 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:27.608 16:35:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.742 16:35:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:35.742 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.742 
16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:35.742 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:35.742 Found net devices under 0000:31:00.0: cvl_0_0 00:23:35.742 16:35:20 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:35.742 Found net devices under 0000:31:00.1: cvl_0_1 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:23:35.742 00:23:35.742 --- 10.0.0.2 ping statistics --- 00:23:35.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.742 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:23:35.742 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:23:35.742 00:23:35.743 --- 10.0.0.1 ping statistics --- 00:23:35.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.743 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2299041 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2299041 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2299041 ']' 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.743 16:35:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:35.743 [2024-11-20 16:35:20.995153] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:23:35.743 [2024-11-20 16:35:20.995227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.743 [2024-11-20 16:35:21.080247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.743 [2024-11-20 16:35:21.124157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.743 [2024-11-20 16:35:21.124194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.743 [2024-11-20 16:35:21.124202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.743 [2024-11-20 16:35:21.124208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.743 [2024-11-20 16:35:21.124215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:35.743 [2024-11-20 16:35:21.126021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.743 [2024-11-20 16:35:21.126241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.743 [2024-11-20 16:35:21.126242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.743 [2024-11-20 16:35:21.126098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 [2024-11-20 16:35:21.814307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 Malloc0 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.003 16:35:21 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 [2024-11-20 16:35:21.922369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 16:35:21 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.003 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.003 [ 00:23:36.003 { 00:23:36.003 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:36.003 "subtype": "Discovery", 00:23:36.003 "listen_addresses": [ 00:23:36.003 { 00:23:36.003 "trtype": "TCP", 00:23:36.003 "adrfam": "IPv4", 00:23:36.003 "traddr": "10.0.0.2", 00:23:36.003 "trsvcid": "4420" 00:23:36.003 } 00:23:36.003 ], 00:23:36.003 "allow_any_host": true, 00:23:36.003 "hosts": [] 00:23:36.003 }, 00:23:36.003 { 00:23:36.003 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.003 "subtype": "NVMe", 00:23:36.003 "listen_addresses": [ 00:23:36.003 { 00:23:36.003 "trtype": "TCP", 00:23:36.003 "adrfam": "IPv4", 00:23:36.003 "traddr": "10.0.0.2", 00:23:36.003 "trsvcid": "4420" 00:23:36.003 } 00:23:36.003 ], 00:23:36.004 "allow_any_host": true, 00:23:36.004 "hosts": [], 00:23:36.004 "serial_number": "SPDK00000000000001", 00:23:36.004 "model_number": "SPDK bdev Controller", 00:23:36.004 "max_namespaces": 32, 00:23:36.004 "min_cntlid": 1, 00:23:36.004 "max_cntlid": 65519, 00:23:36.004 "namespaces": [ 00:23:36.004 { 00:23:36.004 "nsid": 1, 00:23:36.004 "bdev_name": "Malloc0", 00:23:36.004 "name": "Malloc0", 00:23:36.004 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:36.004 "eui64": "ABCDEF0123456789", 00:23:36.004 "uuid": "148b6fc3-2a38-4aa8-ae99-aee19f4caacc" 00:23:36.004 } 00:23:36.004 ] 00:23:36.004 } 00:23:36.004 ] 00:23:36.004 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.004 16:35:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:36.266 [2024-11-20 16:35:21.985281] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:23:36.266 [2024-11-20 16:35:21.985322] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299189 ] 00:23:36.266 [2024-11-20 16:35:22.041045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:36.266 [2024-11-20 16:35:22.041091] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:36.266 [2024-11-20 16:35:22.041097] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:36.266 [2024-11-20 16:35:22.041109] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:36.266 [2024-11-20 16:35:22.041118] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:36.266 [2024-11-20 16:35:22.041810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:36.266 [2024-11-20 16:35:22.041840] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19a6550 0 00:23:36.266 [2024-11-20 16:35:22.047995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:36.266 [2024-11-20 16:35:22.048007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:36.266 [2024-11-20 16:35:22.048012] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:36.266 [2024-11-20 16:35:22.048015] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:36.266 [2024-11-20 16:35:22.048044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.048050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.048054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.266 [2024-11-20 16:35:22.048066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:36.266 [2024-11-20 16:35:22.048087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.266 [2024-11-20 16:35:22.055994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.266 [2024-11-20 16:35:22.056004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.266 [2024-11-20 16:35:22.056007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.266 [2024-11-20 16:35:22.056024] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:36.266 [2024-11-20 16:35:22.056031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:36.266 [2024-11-20 16:35:22.056037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:36.266 [2024-11-20 16:35:22.056049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 
00:23:36.266 [2024-11-20 16:35:22.056065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.266 [2024-11-20 16:35:22.056079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.266 [2024-11-20 16:35:22.056286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.266 [2024-11-20 16:35:22.056292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.266 [2024-11-20 16:35:22.056296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.266 [2024-11-20 16:35:22.056305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:36.266 [2024-11-20 16:35:22.056313] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:36.266 [2024-11-20 16:35:22.056320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.266 [2024-11-20 16:35:22.056334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.266 [2024-11-20 16:35:22.056345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.266 [2024-11-20 16:35:22.056502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.266 [2024-11-20 16:35:22.056509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:36.266 [2024-11-20 16:35:22.056512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.266 [2024-11-20 16:35:22.056522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:36.266 [2024-11-20 16:35:22.056530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:36.266 [2024-11-20 16:35:22.056537] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.266 [2024-11-20 16:35:22.056551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.266 [2024-11-20 16:35:22.056564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.266 [2024-11-20 16:35:22.056738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.266 [2024-11-20 16:35:22.056745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.266 [2024-11-20 16:35:22.056748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.266 [2024-11-20 16:35:22.056757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:36.266 [2024-11-20 16:35:22.056766] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.056774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.266 [2024-11-20 16:35:22.056781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.266 [2024-11-20 16:35:22.056790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.266 [2024-11-20 16:35:22.056992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.266 [2024-11-20 16:35:22.056999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.266 [2024-11-20 16:35:22.057003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.266 [2024-11-20 16:35:22.057006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.266 [2024-11-20 16:35:22.057011] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:36.266 [2024-11-20 16:35:22.057016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:36.266 [2024-11-20 16:35:22.057024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:36.267 [2024-11-20 16:35:22.057131] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:36.267 [2024-11-20 16:35:22.057136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:36.267 [2024-11-20 16:35:22.057145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.057159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.267 [2024-11-20 16:35:22.057169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.267 [2024-11-20 16:35:22.057367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.267 [2024-11-20 16:35:22.057373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.267 [2024-11-20 16:35:22.057377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.267 [2024-11-20 16:35:22.057386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:36.267 [2024-11-20 16:35:22.057395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.057414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.267 [2024-11-20 16:35:22.057424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.267 [2024-11-20 
16:35:22.057609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.267 [2024-11-20 16:35:22.057615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.267 [2024-11-20 16:35:22.057619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.267 [2024-11-20 16:35:22.057627] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:36.267 [2024-11-20 16:35:22.057632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:36.267 [2024-11-20 16:35:22.057640] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:36.267 [2024-11-20 16:35:22.057647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:36.267 [2024-11-20 16:35:22.057656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.057667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.267 [2024-11-20 16:35:22.057677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.267 [2024-11-20 16:35:22.057872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.267 [2024-11-20 16:35:22.057879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:36.267 [2024-11-20 16:35:22.057883] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057887] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a6550): datao=0, datal=4096, cccid=0 00:23:36.267 [2024-11-20 16:35:22.057892] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a08100) on tqpair(0x19a6550): expected_datao=0, payload_size=4096 00:23:36.267 [2024-11-20 16:35:22.057896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057904] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.057908] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.267 [2024-11-20 16:35:22.058061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.267 [2024-11-20 16:35:22.058065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.267 [2024-11-20 16:35:22.058076] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:36.267 [2024-11-20 16:35:22.058081] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:36.267 [2024-11-20 16:35:22.058085] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:36.267 [2024-11-20 16:35:22.058093] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:36.267 [2024-11-20 16:35:22.058097] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:36.267 [2024-11-20 16:35:22.058102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:36.267 [2024-11-20 16:35:22.058114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:36.267 [2024-11-20 16:35:22.058121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.058136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.267 [2024-11-20 16:35:22.058148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.267 [2024-11-20 16:35:22.058358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.267 [2024-11-20 16:35:22.058364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.267 [2024-11-20 16:35:22.058368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.267 [2024-11-20 16:35:22.058379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.058393] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.267 [2024-11-20 16:35:22.058399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.058413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.267 [2024-11-20 16:35:22.058419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.058432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.267 [2024-11-20 16:35:22.058438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.058451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.267 [2024-11-20 16:35:22.058456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:36.267 [2024-11-20 16:35:22.058464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:36.267 [2024-11-20 16:35:22.058470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.058481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.267 [2024-11-20 16:35:22.058492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08100, cid 0, qid 0 00:23:36.267 [2024-11-20 16:35:22.058498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08280, cid 1, qid 0 00:23:36.267 [2024-11-20 16:35:22.058505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08400, cid 2, qid 0 00:23:36.267 [2024-11-20 16:35:22.058510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.267 [2024-11-20 16:35:22.058514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08700, cid 4, qid 0 00:23:36.267 [2024-11-20 16:35:22.058750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.267 [2024-11-20 16:35:22.058756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.267 [2024-11-20 16:35:22.058760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08700) on tqpair=0x19a6550 00:23:36.267 [2024-11-20 16:35:22.058771] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:36.267 [2024-11-20 16:35:22.058776] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:23:36.267 [2024-11-20 16:35:22.058786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.267 [2024-11-20 16:35:22.058790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a6550) 00:23:36.267 [2024-11-20 16:35:22.058796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.267 [2024-11-20 16:35:22.058806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08700, cid 4, qid 0 00:23:36.268 [2024-11-20 16:35:22.058996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.268 [2024-11-20 16:35:22.059003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.268 [2024-11-20 16:35:22.059006] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059010] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a6550): datao=0, datal=4096, cccid=4 00:23:36.268 [2024-11-20 16:35:22.059015] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a08700) on tqpair(0x19a6550): expected_datao=0, payload_size=4096 00:23:36.268 [2024-11-20 16:35:22.059019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059026] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059029] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.268 [2024-11-20 16:35:22.059254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.268 [2024-11-20 16:35:22.059257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1a08700) on tqpair=0x19a6550 00:23:36.268 [2024-11-20 16:35:22.059272] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:36.268 [2024-11-20 16:35:22.059291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a6550) 00:23:36.268 [2024-11-20 16:35:22.059302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.268 [2024-11-20 16:35:22.059309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19a6550) 00:23:36.268 [2024-11-20 16:35:22.059323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.268 [2024-11-20 16:35:22.059336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08700, cid 4, qid 0 00:23:36.268 [2024-11-20 16:35:22.059343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08880, cid 5, qid 0 00:23:36.268 [2024-11-20 16:35:22.059555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.268 [2024-11-20 16:35:22.059561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.268 [2024-11-20 16:35:22.059565] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059568] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a6550): datao=0, datal=1024, cccid=4 00:23:36.268 [2024-11-20 16:35:22.059573] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a08700) on tqpair(0x19a6550): expected_datao=0, payload_size=1024 00:23:36.268 [2024-11-20 16:35:22.059577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059584] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059587] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.268 [2024-11-20 16:35:22.059599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.268 [2024-11-20 16:35:22.059602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.059606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08880) on tqpair=0x19a6550 00:23:36.268 [2024-11-20 16:35:22.103989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.268 [2024-11-20 16:35:22.104001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.268 [2024-11-20 16:35:22.104005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08700) on tqpair=0x19a6550 00:23:36.268 [2024-11-20 16:35:22.104021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a6550) 00:23:36.268 [2024-11-20 16:35:22.104032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.268 [2024-11-20 16:35:22.104049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08700, cid 4, qid 0 00:23:36.268 [2024-11-20 16:35:22.104229] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.268 [2024-11-20 16:35:22.104235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.268 [2024-11-20 16:35:22.104239] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104243] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a6550): datao=0, datal=3072, cccid=4 00:23:36.268 [2024-11-20 16:35:22.104247] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a08700) on tqpair(0x19a6550): expected_datao=0, payload_size=3072 00:23:36.268 [2024-11-20 16:35:22.104252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104268] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104273] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.268 [2024-11-20 16:35:22.104439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.268 [2024-11-20 16:35:22.104443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08700) on tqpair=0x19a6550 00:23:36.268 [2024-11-20 16:35:22.104455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19a6550) 00:23:36.268 [2024-11-20 16:35:22.104465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.268 [2024-11-20 16:35:22.104479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08700, cid 4, qid 0 00:23:36.268 [2024-11-20 
16:35:22.104694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.268 [2024-11-20 16:35:22.104701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.268 [2024-11-20 16:35:22.104704] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104708] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19a6550): datao=0, datal=8, cccid=4 00:23:36.268 [2024-11-20 16:35:22.104713] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a08700) on tqpair(0x19a6550): expected_datao=0, payload_size=8 00:23:36.268 [2024-11-20 16:35:22.104717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104723] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.104727] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.145156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.268 [2024-11-20 16:35:22.145165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.268 [2024-11-20 16:35:22.145169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.268 [2024-11-20 16:35:22.145173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08700) on tqpair=0x19a6550 00:23:36.268 ===================================================== 00:23:36.268 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:36.268 ===================================================== 00:23:36.268 Controller Capabilities/Features 00:23:36.268 ================================ 00:23:36.268 Vendor ID: 0000 00:23:36.268 Subsystem Vendor ID: 0000 00:23:36.268 Serial Number: .................... 00:23:36.268 Model Number: ........................................ 
00:23:36.268 Firmware Version: 25.01 00:23:36.268 Recommended Arb Burst: 0 00:23:36.268 IEEE OUI Identifier: 00 00 00 00:23:36.268 Multi-path I/O 00:23:36.268 May have multiple subsystem ports: No 00:23:36.268 May have multiple controllers: No 00:23:36.268 Associated with SR-IOV VF: No 00:23:36.268 Max Data Transfer Size: 131072 00:23:36.268 Max Number of Namespaces: 0 00:23:36.268 Max Number of I/O Queues: 1024 00:23:36.268 NVMe Specification Version (VS): 1.3 00:23:36.268 NVMe Specification Version (Identify): 1.3 00:23:36.268 Maximum Queue Entries: 128 00:23:36.268 Contiguous Queues Required: Yes 00:23:36.268 Arbitration Mechanisms Supported 00:23:36.268 Weighted Round Robin: Not Supported 00:23:36.268 Vendor Specific: Not Supported 00:23:36.268 Reset Timeout: 15000 ms 00:23:36.268 Doorbell Stride: 4 bytes 00:23:36.268 NVM Subsystem Reset: Not Supported 00:23:36.268 Command Sets Supported 00:23:36.268 NVM Command Set: Supported 00:23:36.268 Boot Partition: Not Supported 00:23:36.268 Memory Page Size Minimum: 4096 bytes 00:23:36.268 Memory Page Size Maximum: 4096 bytes 00:23:36.268 Persistent Memory Region: Not Supported 00:23:36.268 Optional Asynchronous Events Supported 00:23:36.268 Namespace Attribute Notices: Not Supported 00:23:36.268 Firmware Activation Notices: Not Supported 00:23:36.268 ANA Change Notices: Not Supported 00:23:36.268 PLE Aggregate Log Change Notices: Not Supported 00:23:36.268 LBA Status Info Alert Notices: Not Supported 00:23:36.268 EGE Aggregate Log Change Notices: Not Supported 00:23:36.268 Normal NVM Subsystem Shutdown event: Not Supported 00:23:36.268 Zone Descriptor Change Notices: Not Supported 00:23:36.268 Discovery Log Change Notices: Supported 00:23:36.268 Controller Attributes 00:23:36.268 128-bit Host Identifier: Not Supported 00:23:36.268 Non-Operational Permissive Mode: Not Supported 00:23:36.268 NVM Sets: Not Supported 00:23:36.268 Read Recovery Levels: Not Supported 00:23:36.268 Endurance Groups: Not Supported 00:23:36.268 
Predictable Latency Mode: Not Supported 00:23:36.269 Traffic Based Keep ALive: Not Supported 00:23:36.269 Namespace Granularity: Not Supported 00:23:36.269 SQ Associations: Not Supported 00:23:36.269 UUID List: Not Supported 00:23:36.269 Multi-Domain Subsystem: Not Supported 00:23:36.269 Fixed Capacity Management: Not Supported 00:23:36.269 Variable Capacity Management: Not Supported 00:23:36.269 Delete Endurance Group: Not Supported 00:23:36.269 Delete NVM Set: Not Supported 00:23:36.269 Extended LBA Formats Supported: Not Supported 00:23:36.269 Flexible Data Placement Supported: Not Supported 00:23:36.269 00:23:36.269 Controller Memory Buffer Support 00:23:36.269 ================================ 00:23:36.269 Supported: No 00:23:36.269 00:23:36.269 Persistent Memory Region Support 00:23:36.269 ================================ 00:23:36.269 Supported: No 00:23:36.269 00:23:36.269 Admin Command Set Attributes 00:23:36.269 ============================ 00:23:36.269 Security Send/Receive: Not Supported 00:23:36.269 Format NVM: Not Supported 00:23:36.269 Firmware Activate/Download: Not Supported 00:23:36.269 Namespace Management: Not Supported 00:23:36.269 Device Self-Test: Not Supported 00:23:36.269 Directives: Not Supported 00:23:36.269 NVMe-MI: Not Supported 00:23:36.269 Virtualization Management: Not Supported 00:23:36.269 Doorbell Buffer Config: Not Supported 00:23:36.269 Get LBA Status Capability: Not Supported 00:23:36.269 Command & Feature Lockdown Capability: Not Supported 00:23:36.269 Abort Command Limit: 1 00:23:36.269 Async Event Request Limit: 4 00:23:36.269 Number of Firmware Slots: N/A 00:23:36.269 Firmware Slot 1 Read-Only: N/A 00:23:36.269 Firmware Activation Without Reset: N/A 00:23:36.269 Multiple Update Detection Support: N/A 00:23:36.269 Firmware Update Granularity: No Information Provided 00:23:36.269 Per-Namespace SMART Log: No 00:23:36.269 Asymmetric Namespace Access Log Page: Not Supported 00:23:36.269 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:36.269 Command Effects Log Page: Not Supported 00:23:36.269 Get Log Page Extended Data: Supported 00:23:36.269 Telemetry Log Pages: Not Supported 00:23:36.269 Persistent Event Log Pages: Not Supported 00:23:36.269 Supported Log Pages Log Page: May Support 00:23:36.269 Commands Supported & Effects Log Page: Not Supported 00:23:36.269 Feature Identifiers & Effects Log Page:May Support 00:23:36.269 NVMe-MI Commands & Effects Log Page: May Support 00:23:36.269 Data Area 4 for Telemetry Log: Not Supported 00:23:36.269 Error Log Page Entries Supported: 128 00:23:36.269 Keep Alive: Not Supported 00:23:36.269 00:23:36.269 NVM Command Set Attributes 00:23:36.269 ========================== 00:23:36.269 Submission Queue Entry Size 00:23:36.269 Max: 1 00:23:36.269 Min: 1 00:23:36.269 Completion Queue Entry Size 00:23:36.269 Max: 1 00:23:36.269 Min: 1 00:23:36.269 Number of Namespaces: 0 00:23:36.269 Compare Command: Not Supported 00:23:36.269 Write Uncorrectable Command: Not Supported 00:23:36.269 Dataset Management Command: Not Supported 00:23:36.269 Write Zeroes Command: Not Supported 00:23:36.269 Set Features Save Field: Not Supported 00:23:36.269 Reservations: Not Supported 00:23:36.269 Timestamp: Not Supported 00:23:36.269 Copy: Not Supported 00:23:36.269 Volatile Write Cache: Not Present 00:23:36.269 Atomic Write Unit (Normal): 1 00:23:36.269 Atomic Write Unit (PFail): 1 00:23:36.269 Atomic Compare & Write Unit: 1 00:23:36.269 Fused Compare & Write: Supported 00:23:36.269 Scatter-Gather List 00:23:36.269 SGL Command Set: Supported 00:23:36.269 SGL Keyed: Supported 00:23:36.269 SGL Bit Bucket Descriptor: Not Supported 00:23:36.269 SGL Metadata Pointer: Not Supported 00:23:36.269 Oversized SGL: Not Supported 00:23:36.269 SGL Metadata Address: Not Supported 00:23:36.269 SGL Offset: Supported 00:23:36.269 Transport SGL Data Block: Not Supported 00:23:36.269 Replay Protected Memory Block: Not Supported 00:23:36.269 00:23:36.269 
Firmware Slot Information 00:23:36.269 ========================= 00:23:36.269 Active slot: 0 00:23:36.269 00:23:36.269 00:23:36.269 Error Log 00:23:36.269 ========= 00:23:36.269 00:23:36.269 Active Namespaces 00:23:36.269 ================= 00:23:36.269 Discovery Log Page 00:23:36.269 ================== 00:23:36.269 Generation Counter: 2 00:23:36.269 Number of Records: 2 00:23:36.269 Record Format: 0 00:23:36.269 00:23:36.269 Discovery Log Entry 0 00:23:36.269 ---------------------- 00:23:36.269 Transport Type: 3 (TCP) 00:23:36.269 Address Family: 1 (IPv4) 00:23:36.269 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:36.269 Entry Flags: 00:23:36.269 Duplicate Returned Information: 1 00:23:36.269 Explicit Persistent Connection Support for Discovery: 1 00:23:36.269 Transport Requirements: 00:23:36.269 Secure Channel: Not Required 00:23:36.269 Port ID: 0 (0x0000) 00:23:36.269 Controller ID: 65535 (0xffff) 00:23:36.269 Admin Max SQ Size: 128 00:23:36.269 Transport Service Identifier: 4420 00:23:36.269 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:36.269 Transport Address: 10.0.0.2 00:23:36.269 Discovery Log Entry 1 00:23:36.269 ---------------------- 00:23:36.269 Transport Type: 3 (TCP) 00:23:36.269 Address Family: 1 (IPv4) 00:23:36.269 Subsystem Type: 2 (NVM Subsystem) 00:23:36.269 Entry Flags: 00:23:36.269 Duplicate Returned Information: 0 00:23:36.269 Explicit Persistent Connection Support for Discovery: 0 00:23:36.269 Transport Requirements: 00:23:36.269 Secure Channel: Not Required 00:23:36.269 Port ID: 0 (0x0000) 00:23:36.269 Controller ID: 65535 (0xffff) 00:23:36.269 Admin Max SQ Size: 128 00:23:36.269 Transport Service Identifier: 4420 00:23:36.269 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:36.269 Transport Address: 10.0.0.2 [2024-11-20 16:35:22.145256] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:36.269 [2024-11-20 
16:35:22.145267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08100) on tqpair=0x19a6550 00:23:36.269 [2024-11-20 16:35:22.145273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.269 [2024-11-20 16:35:22.145279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08280) on tqpair=0x19a6550 00:23:36.269 [2024-11-20 16:35:22.145284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.269 [2024-11-20 16:35:22.145289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08400) on tqpair=0x19a6550 00:23:36.269 [2024-11-20 16:35:22.145294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.269 [2024-11-20 16:35:22.145299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.269 [2024-11-20 16:35:22.145303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.269 [2024-11-20 16:35:22.145314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.269 [2024-11-20 16:35:22.145318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.269 [2024-11-20 16:35:22.145322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.269 [2024-11-20 16:35:22.145330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.269 [2024-11-20 16:35:22.145343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.269 [2024-11-20 16:35:22.145447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.269 [2024-11-20 
16:35:22.145453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.269 [2024-11-20 16:35:22.145457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.269 [2024-11-20 16:35:22.145461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.269 [2024-11-20 16:35:22.145468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.269 [2024-11-20 16:35:22.145472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.269 [2024-11-20 16:35:22.145475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.269 [2024-11-20 16:35:22.145482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.269 [2024-11-20 16:35:22.145496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.145668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.145674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.145678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.145682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.145687] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:36.270 [2024-11-20 16:35:22.145692] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:36.270 [2024-11-20 16:35:22.145701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.145705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 
[2024-11-20 16:35:22.145708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.145715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.145726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.145934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.145940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.145944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.145948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.145958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.145962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.145966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.145973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.145989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.146159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.146166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.146169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on 
tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.146183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.146197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.146207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.146402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.146409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.146412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.146426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.146443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.146453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.146623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.146629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:36.270 [2024-11-20 16:35:22.146633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.146646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.146661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.146671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.146893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.146900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.146903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.146917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.146924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.146931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.146941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.147126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.147133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.147137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.147150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.147165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.147175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.147395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.147401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.147405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.147418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.147433] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.147445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.147618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.147624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.147627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.147641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.147655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.147665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.147888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.147894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.147898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.147911] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147915] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.147919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19a6550) 00:23:36.270 [2024-11-20 16:35:22.147925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.270 [2024-11-20 16:35:22.147935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a08580, cid 3, qid 0 00:23:36.270 [2024-11-20 16:35:22.151988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.270 [2024-11-20 16:35:22.151997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.270 [2024-11-20 16:35:22.152000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.270 [2024-11-20 16:35:22.152004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a08580) on tqpair=0x19a6550 00:23:36.270 [2024-11-20 16:35:22.152012] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:23:36.270 00:23:36.270 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:36.270 [2024-11-20 16:35:22.191016] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:23:36.270 [2024-11-20 16:35:22.191060] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299197 ] 00:23:36.535 [2024-11-20 16:35:22.245052] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:36.535 [2024-11-20 16:35:22.245098] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:36.535 [2024-11-20 16:35:22.245104] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:36.535 [2024-11-20 16:35:22.245117] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:36.535 [2024-11-20 16:35:22.245130] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:36.535 [2024-11-20 16:35:22.249192] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:36.535 [2024-11-20 16:35:22.249223] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c18550 0 00:23:36.535 [2024-11-20 16:35:22.256998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:36.535 [2024-11-20 16:35:22.257011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:36.535 [2024-11-20 16:35:22.257016] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:36.535 [2024-11-20 16:35:22.257019] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:36.535 [2024-11-20 16:35:22.257046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.257051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.257055] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.535 [2024-11-20 16:35:22.257066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:36.535 [2024-11-20 16:35:22.257083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.535 [2024-11-20 16:35:22.264991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.535 [2024-11-20 16:35:22.265001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.535 [2024-11-20 16:35:22.265005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.535 [2024-11-20 16:35:22.265018] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:36.535 [2024-11-20 16:35:22.265024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:36.535 [2024-11-20 16:35:22.265029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:36.535 [2024-11-20 16:35:22.265041] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.535 [2024-11-20 16:35:22.265057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.535 [2024-11-20 16:35:22.265070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.535 [2024-11-20 16:35:22.265246] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.535 [2024-11-20 16:35:22.265254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.535 [2024-11-20 16:35:22.265257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.535 [2024-11-20 16:35:22.265266] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:36.535 [2024-11-20 16:35:22.265274] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:36.535 [2024-11-20 16:35:22.265282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.535 [2024-11-20 16:35:22.265297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.535 [2024-11-20 16:35:22.265307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.535 [2024-11-20 16:35:22.265538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.535 [2024-11-20 16:35:22.265544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.535 [2024-11-20 16:35:22.265548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.535 [2024-11-20 16:35:22.265557] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:23:36.535 [2024-11-20 16:35:22.265565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:36.535 [2024-11-20 16:35:22.265571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.535 [2024-11-20 16:35:22.265585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.535 [2024-11-20 16:35:22.265596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.535 [2024-11-20 16:35:22.265790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.535 [2024-11-20 16:35:22.265797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.535 [2024-11-20 16:35:22.265800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.535 [2024-11-20 16:35:22.265809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:36.535 [2024-11-20 16:35:22.265820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.265828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.535 [2024-11-20 16:35:22.265835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.535 [2024-11-20 16:35:22.265845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.535 [2024-11-20 16:35:22.266043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.535 [2024-11-20 16:35:22.266050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.535 [2024-11-20 16:35:22.266053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.266059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.535 [2024-11-20 16:35:22.266064] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:36.535 [2024-11-20 16:35:22.266069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:36.535 [2024-11-20 16:35:22.266077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:36.535 [2024-11-20 16:35:22.266186] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:36.535 [2024-11-20 16:35:22.266193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:36.535 [2024-11-20 16:35:22.266201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.266205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.266209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.535 [2024-11-20 16:35:22.266217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.535 [2024-11-20 16:35:22.266228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.535 [2024-11-20 16:35:22.266367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.535 [2024-11-20 16:35:22.266374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.535 [2024-11-20 16:35:22.266378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.266381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.535 [2024-11-20 16:35:22.266386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:36.535 [2024-11-20 16:35:22.266395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.266399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.535 [2024-11-20 16:35:22.266403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.266409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.536 [2024-11-20 16:35:22.266419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.536 [2024-11-20 16:35:22.266645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.536 [2024-11-20 16:35:22.266652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.536 [2024-11-20 16:35:22.266655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.266659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.536 [2024-11-20 16:35:22.266664] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:36.536 [2024-11-20 16:35:22.266668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.266676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:36.536 [2024-11-20 16:35:22.266686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.266694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.266698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.266705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.536 [2024-11-20 16:35:22.266715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.536 [2024-11-20 16:35:22.266902] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.536 [2024-11-20 16:35:22.266909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.536 [2024-11-20 16:35:22.266912] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.266916] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c18550): datao=0, datal=4096, cccid=0 00:23:36.536 [2024-11-20 16:35:22.266921] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c7a100) on tqpair(0x1c18550): expected_datao=0, payload_size=4096 00:23:36.536 [2024-11-20 16:35:22.266926] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.266933] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.266937] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.536 [2024-11-20 16:35:22.267157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.536 [2024-11-20 16:35:22.267162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.536 [2024-11-20 16:35:22.267174] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:36.536 [2024-11-20 16:35:22.267178] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:36.536 [2024-11-20 16:35:22.267183] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:36.536 [2024-11-20 16:35:22.267193] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:36.536 [2024-11-20 16:35:22.267197] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:36.536 [2024-11-20 16:35:22.267202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.267212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.267219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267223] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267226] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.267233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.536 [2024-11-20 16:35:22.267244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a100, cid 0, qid 0 00:23:36.536 [2024-11-20 16:35:22.267454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.536 [2024-11-20 16:35:22.267461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.536 [2024-11-20 16:35:22.267464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.536 [2024-11-20 16:35:22.267475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.267488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.536 [2024-11-20 16:35:22.267494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.267507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:36.536 [2024-11-20 16:35:22.267513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.267526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.536 [2024-11-20 16:35:22.267532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.267545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.536 [2024-11-20 16:35:22.267552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.267559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.267566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.267576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.536 [2024-11-20 16:35:22.267588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1c7a100, cid 0, qid 0 00:23:36.536 [2024-11-20 16:35:22.267593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a280, cid 1, qid 0 00:23:36.536 [2024-11-20 16:35:22.267598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a400, cid 2, qid 0 00:23:36.536 [2024-11-20 16:35:22.267603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a580, cid 3, qid 0 00:23:36.536 [2024-11-20 16:35:22.267608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a700, cid 4, qid 0 00:23:36.536 [2024-11-20 16:35:22.267824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.536 [2024-11-20 16:35:22.267831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.536 [2024-11-20 16:35:22.267834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a700) on tqpair=0x1c18550 00:23:36.536 [2024-11-20 16:35:22.267845] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:36.536 [2024-11-20 16:35:22.267850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.267858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.267864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.267870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.267874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 
16:35:22.267878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.267884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:36.536 [2024-11-20 16:35:22.267894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a700, cid 4, qid 0 00:23:36.536 [2024-11-20 16:35:22.268042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.536 [2024-11-20 16:35:22.268049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.536 [2024-11-20 16:35:22.268053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.268056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a700) on tqpair=0x1c18550 00:23:36.536 [2024-11-20 16:35:22.268120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.268129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:36.536 [2024-11-20 16:35:22.268136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.268140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c18550) 00:23:36.536 [2024-11-20 16:35:22.268148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.536 [2024-11-20 16:35:22.268158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a700, cid 4, qid 0 00:23:36.536 [2024-11-20 16:35:22.268391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.536 [2024-11-20 16:35:22.268397] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.536 [2024-11-20 16:35:22.268401] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.268404] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c18550): datao=0, datal=4096, cccid=4 00:23:36.536 [2024-11-20 16:35:22.268409] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c7a700) on tqpair(0x1c18550): expected_datao=0, payload_size=4096 00:23:36.536 [2024-11-20 16:35:22.268413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.536 [2024-11-20 16:35:22.268420] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.268424] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.268533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.537 [2024-11-20 16:35:22.268540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.537 [2024-11-20 16:35:22.268543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.268547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a700) on tqpair=0x1c18550 00:23:36.537 [2024-11-20 16:35:22.268555] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:36.537 [2024-11-20 16:35:22.268564] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.268572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.268579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.268583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.268589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.537 [2024-11-20 16:35:22.268600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a700, cid 4, qid 0 00:23:36.537 [2024-11-20 16:35:22.268842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.537 [2024-11-20 16:35:22.268849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.537 [2024-11-20 16:35:22.268852] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.268856] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c18550): datao=0, datal=4096, cccid=4 00:23:36.537 [2024-11-20 16:35:22.268860] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c7a700) on tqpair(0x1c18550): expected_datao=0, payload_size=4096 00:23:36.537 [2024-11-20 16:35:22.268864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.268871] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.268875] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.272990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.537 [2024-11-20 16:35:22.272998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.537 [2024-11-20 16:35:22.273001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a700) on tqpair=0x1c18550 00:23:36.537 [2024-11-20 16:35:22.273020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:36.537 
[2024-11-20 16:35:22.273029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.273038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.273049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.537 [2024-11-20 16:35:22.273060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a700, cid 4, qid 0 00:23:36.537 [2024-11-20 16:35:22.273239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.537 [2024-11-20 16:35:22.273247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.537 [2024-11-20 16:35:22.273250] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273254] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c18550): datao=0, datal=4096, cccid=4 00:23:36.537 [2024-11-20 16:35:22.273258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c7a700) on tqpair(0x1c18550): expected_datao=0, payload_size=4096 00:23:36.537 [2024-11-20 16:35:22.273263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273269] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273273] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.537 [2024-11-20 16:35:22.273424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.537 [2024-11-20 16:35:22.273427] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a700) on tqpair=0x1c18550 00:23:36.537 [2024-11-20 16:35:22.273438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.273446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.273454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.273461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.273466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.273471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.273476] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:36.537 [2024-11-20 16:35:22.273481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:36.537 [2024-11-20 16:35:22.273486] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:36.537 [2024-11-20 16:35:22.273499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273503] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.273509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.537 [2024-11-20 16:35:22.273516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.273531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.537 [2024-11-20 16:35:22.273544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a700, cid 4, qid 0 00:23:36.537 [2024-11-20 16:35:22.273550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a880, cid 5, qid 0 00:23:36.537 [2024-11-20 16:35:22.273766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.537 [2024-11-20 16:35:22.273773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.537 [2024-11-20 16:35:22.273776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a700) on tqpair=0x1c18550 00:23:36.537 [2024-11-20 16:35:22.273787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.537 [2024-11-20 16:35:22.273793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.537 [2024-11-20 16:35:22.273796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a880) on tqpair=0x1c18550 00:23:36.537 [2024-11-20 
16:35:22.273809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.273813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.273819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.537 [2024-11-20 16:35:22.273829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a880, cid 5, qid 0 00:23:36.537 [2024-11-20 16:35:22.274017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.537 [2024-11-20 16:35:22.274024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.537 [2024-11-20 16:35:22.274027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.274031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a880) on tqpair=0x1c18550 00:23:36.537 [2024-11-20 16:35:22.274040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.274044] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.274051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.537 [2024-11-20 16:35:22.274061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a880, cid 5, qid 0 00:23:36.537 [2024-11-20 16:35:22.274269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.537 [2024-11-20 16:35:22.274276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.537 [2024-11-20 16:35:22.274279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.274283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1c7a880) on tqpair=0x1c18550 00:23:36.537 [2024-11-20 16:35:22.274292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.274296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.274302] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.537 [2024-11-20 16:35:22.274312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a880, cid 5, qid 0 00:23:36.537 [2024-11-20 16:35:22.274527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.537 [2024-11-20 16:35:22.274533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.537 [2024-11-20 16:35:22.274536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.274540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a880) on tqpair=0x1c18550 00:23:36.537 [2024-11-20 16:35:22.274554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.274562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.274568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.537 [2024-11-20 16:35:22.274575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.537 [2024-11-20 16:35:22.274579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c18550) 00:23:36.537 [2024-11-20 16:35:22.274585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:36.537 [2024-11-20 16:35:22.274593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.274596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c18550) 00:23:36.538 [2024-11-20 16:35:22.274602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.538 [2024-11-20 16:35:22.274610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.274613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c18550) 00:23:36.538 [2024-11-20 16:35:22.274619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.538 [2024-11-20 16:35:22.274631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a880, cid 5, qid 0 00:23:36.538 [2024-11-20 16:35:22.274636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a700, cid 4, qid 0 00:23:36.538 [2024-11-20 16:35:22.274641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7aa00, cid 6, qid 0 00:23:36.538 [2024-11-20 16:35:22.274646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7ab80, cid 7, qid 0 00:23:36.538 [2024-11-20 16:35:22.274898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.538 [2024-11-20 16:35:22.274905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.538 [2024-11-20 16:35:22.274909] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.274912] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c18550): datao=0, datal=8192, cccid=5 00:23:36.538 [2024-11-20 16:35:22.274917] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c7a880) on tqpair(0x1c18550): expected_datao=0, payload_size=8192 00:23:36.538 [2024-11-20 16:35:22.274921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275013] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275019] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.538 [2024-11-20 16:35:22.275031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.538 [2024-11-20 16:35:22.275035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275039] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c18550): datao=0, datal=512, cccid=4 00:23:36.538 [2024-11-20 16:35:22.275043] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c7a700) on tqpair(0x1c18550): expected_datao=0, payload_size=512 00:23:36.538 [2024-11-20 16:35:22.275047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275054] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275057] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.538 [2024-11-20 16:35:22.275069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.538 [2024-11-20 16:35:22.275072] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275078] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c18550): datao=0, datal=512, cccid=6 00:23:36.538 [2024-11-20 16:35:22.275083] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1c7aa00) on tqpair(0x1c18550): expected_datao=0, payload_size=512 00:23:36.538 [2024-11-20 16:35:22.275087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275093] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275097] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:36.538 [2024-11-20 16:35:22.275108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:36.538 [2024-11-20 16:35:22.275112] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275115] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c18550): datao=0, datal=4096, cccid=7 00:23:36.538 [2024-11-20 16:35:22.275120] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c7ab80) on tqpair(0x1c18550): expected_datao=0, payload_size=4096 00:23:36.538 [2024-11-20 16:35:22.275124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275136] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275139] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.538 [2024-11-20 16:35:22.275363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.538 [2024-11-20 16:35:22.275367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a880) on tqpair=0x1c18550 00:23:36.538 [2024-11-20 16:35:22.275383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.538 [2024-11-20 16:35:22.275389] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.538 [2024-11-20 16:35:22.275392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a700) on tqpair=0x1c18550 00:23:36.538 [2024-11-20 16:35:22.275406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.538 [2024-11-20 16:35:22.275412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.538 [2024-11-20 16:35:22.275415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7aa00) on tqpair=0x1c18550 00:23:36.538 [2024-11-20 16:35:22.275426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.538 [2024-11-20 16:35:22.275432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.538 [2024-11-20 16:35:22.275435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.538 [2024-11-20 16:35:22.275439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7ab80) on tqpair=0x1c18550 00:23:36.538 ===================================================== 00:23:36.538 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.538 ===================================================== 00:23:36.538 Controller Capabilities/Features 00:23:36.538 ================================ 00:23:36.538 Vendor ID: 8086 00:23:36.538 Subsystem Vendor ID: 8086 00:23:36.538 Serial Number: SPDK00000000000001 00:23:36.538 Model Number: SPDK bdev Controller 00:23:36.538 Firmware Version: 25.01 00:23:36.538 Recommended Arb Burst: 6 00:23:36.538 IEEE OUI Identifier: e4 d2 5c 00:23:36.538 Multi-path I/O 00:23:36.538 May have multiple subsystem ports: Yes 00:23:36.538 May have multiple controllers: Yes 00:23:36.538 Associated with SR-IOV VF: No 
00:23:36.538 Max Data Transfer Size: 131072 00:23:36.538 Max Number of Namespaces: 32 00:23:36.538 Max Number of I/O Queues: 127 00:23:36.538 NVMe Specification Version (VS): 1.3 00:23:36.538 NVMe Specification Version (Identify): 1.3 00:23:36.538 Maximum Queue Entries: 128 00:23:36.538 Contiguous Queues Required: Yes 00:23:36.538 Arbitration Mechanisms Supported 00:23:36.538 Weighted Round Robin: Not Supported 00:23:36.538 Vendor Specific: Not Supported 00:23:36.538 Reset Timeout: 15000 ms 00:23:36.538 Doorbell Stride: 4 bytes 00:23:36.538 NVM Subsystem Reset: Not Supported 00:23:36.538 Command Sets Supported 00:23:36.538 NVM Command Set: Supported 00:23:36.538 Boot Partition: Not Supported 00:23:36.538 Memory Page Size Minimum: 4096 bytes 00:23:36.538 Memory Page Size Maximum: 4096 bytes 00:23:36.538 Persistent Memory Region: Not Supported 00:23:36.538 Optional Asynchronous Events Supported 00:23:36.538 Namespace Attribute Notices: Supported 00:23:36.538 Firmware Activation Notices: Not Supported 00:23:36.538 ANA Change Notices: Not Supported 00:23:36.538 PLE Aggregate Log Change Notices: Not Supported 00:23:36.538 LBA Status Info Alert Notices: Not Supported 00:23:36.538 EGE Aggregate Log Change Notices: Not Supported 00:23:36.538 Normal NVM Subsystem Shutdown event: Not Supported 00:23:36.538 Zone Descriptor Change Notices: Not Supported 00:23:36.538 Discovery Log Change Notices: Not Supported 00:23:36.538 Controller Attributes 00:23:36.538 128-bit Host Identifier: Supported 00:23:36.538 Non-Operational Permissive Mode: Not Supported 00:23:36.538 NVM Sets: Not Supported 00:23:36.538 Read Recovery Levels: Not Supported 00:23:36.538 Endurance Groups: Not Supported 00:23:36.538 Predictable Latency Mode: Not Supported 00:23:36.538 Traffic Based Keep ALive: Not Supported 00:23:36.538 Namespace Granularity: Not Supported 00:23:36.538 SQ Associations: Not Supported 00:23:36.538 UUID List: Not Supported 00:23:36.538 Multi-Domain Subsystem: Not Supported 00:23:36.538 
Fixed Capacity Management: Not Supported 00:23:36.538 Variable Capacity Management: Not Supported 00:23:36.538 Delete Endurance Group: Not Supported 00:23:36.538 Delete NVM Set: Not Supported 00:23:36.538 Extended LBA Formats Supported: Not Supported 00:23:36.538 Flexible Data Placement Supported: Not Supported 00:23:36.538 00:23:36.538 Controller Memory Buffer Support 00:23:36.538 ================================ 00:23:36.538 Supported: No 00:23:36.538 00:23:36.538 Persistent Memory Region Support 00:23:36.538 ================================ 00:23:36.538 Supported: No 00:23:36.538 00:23:36.538 Admin Command Set Attributes 00:23:36.538 ============================ 00:23:36.538 Security Send/Receive: Not Supported 00:23:36.538 Format NVM: Not Supported 00:23:36.538 Firmware Activate/Download: Not Supported 00:23:36.538 Namespace Management: Not Supported 00:23:36.538 Device Self-Test: Not Supported 00:23:36.538 Directives: Not Supported 00:23:36.538 NVMe-MI: Not Supported 00:23:36.538 Virtualization Management: Not Supported 00:23:36.538 Doorbell Buffer Config: Not Supported 00:23:36.538 Get LBA Status Capability: Not Supported 00:23:36.538 Command & Feature Lockdown Capability: Not Supported 00:23:36.538 Abort Command Limit: 4 00:23:36.538 Async Event Request Limit: 4 00:23:36.538 Number of Firmware Slots: N/A 00:23:36.538 Firmware Slot 1 Read-Only: N/A 00:23:36.538 Firmware Activation Without Reset: N/A 00:23:36.538 Multiple Update Detection Support: N/A 00:23:36.539 Firmware Update Granularity: No Information Provided 00:23:36.539 Per-Namespace SMART Log: No 00:23:36.539 Asymmetric Namespace Access Log Page: Not Supported 00:23:36.539 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:36.539 Command Effects Log Page: Supported 00:23:36.539 Get Log Page Extended Data: Supported 00:23:36.539 Telemetry Log Pages: Not Supported 00:23:36.539 Persistent Event Log Pages: Not Supported 00:23:36.539 Supported Log Pages Log Page: May Support 00:23:36.539 Commands Supported & 
Effects Log Page: Not Supported 00:23:36.539 Feature Identifiers & Effects Log Page:May Support 00:23:36.539 NVMe-MI Commands & Effects Log Page: May Support 00:23:36.539 Data Area 4 for Telemetry Log: Not Supported 00:23:36.539 Error Log Page Entries Supported: 128 00:23:36.539 Keep Alive: Supported 00:23:36.539 Keep Alive Granularity: 10000 ms 00:23:36.539 00:23:36.539 NVM Command Set Attributes 00:23:36.539 ========================== 00:23:36.539 Submission Queue Entry Size 00:23:36.539 Max: 64 00:23:36.539 Min: 64 00:23:36.539 Completion Queue Entry Size 00:23:36.539 Max: 16 00:23:36.539 Min: 16 00:23:36.539 Number of Namespaces: 32 00:23:36.539 Compare Command: Supported 00:23:36.539 Write Uncorrectable Command: Not Supported 00:23:36.539 Dataset Management Command: Supported 00:23:36.539 Write Zeroes Command: Supported 00:23:36.539 Set Features Save Field: Not Supported 00:23:36.539 Reservations: Supported 00:23:36.539 Timestamp: Not Supported 00:23:36.539 Copy: Supported 00:23:36.539 Volatile Write Cache: Present 00:23:36.539 Atomic Write Unit (Normal): 1 00:23:36.539 Atomic Write Unit (PFail): 1 00:23:36.539 Atomic Compare & Write Unit: 1 00:23:36.539 Fused Compare & Write: Supported 00:23:36.539 Scatter-Gather List 00:23:36.539 SGL Command Set: Supported 00:23:36.539 SGL Keyed: Supported 00:23:36.539 SGL Bit Bucket Descriptor: Not Supported 00:23:36.539 SGL Metadata Pointer: Not Supported 00:23:36.539 Oversized SGL: Not Supported 00:23:36.539 SGL Metadata Address: Not Supported 00:23:36.539 SGL Offset: Supported 00:23:36.539 Transport SGL Data Block: Not Supported 00:23:36.539 Replay Protected Memory Block: Not Supported 00:23:36.539 00:23:36.539 Firmware Slot Information 00:23:36.539 ========================= 00:23:36.539 Active slot: 1 00:23:36.539 Slot 1 Firmware Revision: 25.01 00:23:36.539 00:23:36.539 00:23:36.539 Commands Supported and Effects 00:23:36.539 ============================== 00:23:36.539 Admin Commands 00:23:36.539 -------------- 
00:23:36.539 Get Log Page (02h): Supported 00:23:36.539 Identify (06h): Supported 00:23:36.539 Abort (08h): Supported 00:23:36.539 Set Features (09h): Supported 00:23:36.539 Get Features (0Ah): Supported 00:23:36.539 Asynchronous Event Request (0Ch): Supported 00:23:36.539 Keep Alive (18h): Supported 00:23:36.539 I/O Commands 00:23:36.539 ------------ 00:23:36.539 Flush (00h): Supported LBA-Change 00:23:36.539 Write (01h): Supported LBA-Change 00:23:36.539 Read (02h): Supported 00:23:36.539 Compare (05h): Supported 00:23:36.539 Write Zeroes (08h): Supported LBA-Change 00:23:36.539 Dataset Management (09h): Supported LBA-Change 00:23:36.539 Copy (19h): Supported LBA-Change 00:23:36.539 00:23:36.539 Error Log 00:23:36.539 ========= 00:23:36.539 00:23:36.539 Arbitration 00:23:36.539 =========== 00:23:36.539 Arbitration Burst: 1 00:23:36.539 00:23:36.539 Power Management 00:23:36.539 ================ 00:23:36.539 Number of Power States: 1 00:23:36.539 Current Power State: Power State #0 00:23:36.539 Power State #0: 00:23:36.539 Max Power: 0.00 W 00:23:36.539 Non-Operational State: Operational 00:23:36.539 Entry Latency: Not Reported 00:23:36.539 Exit Latency: Not Reported 00:23:36.539 Relative Read Throughput: 0 00:23:36.539 Relative Read Latency: 0 00:23:36.539 Relative Write Throughput: 0 00:23:36.539 Relative Write Latency: 0 00:23:36.539 Idle Power: Not Reported 00:23:36.539 Active Power: Not Reported 00:23:36.539 Non-Operational Permissive Mode: Not Supported 00:23:36.539 00:23:36.539 Health Information 00:23:36.539 ================== 00:23:36.539 Critical Warnings: 00:23:36.539 Available Spare Space: OK 00:23:36.539 Temperature: OK 00:23:36.539 Device Reliability: OK 00:23:36.539 Read Only: No 00:23:36.539 Volatile Memory Backup: OK 00:23:36.539 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:36.539 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:36.539 Available Spare: 0% 00:23:36.539 Available Spare Threshold: 0% 00:23:36.539 Life Percentage 
Used:[2024-11-20 16:35:22.275535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.539 [2024-11-20 16:35:22.275541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c18550) 00:23:36.539 [2024-11-20 16:35:22.275549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.539 [2024-11-20 16:35:22.275561] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7ab80, cid 7, qid 0 00:23:36.539 [2024-11-20 16:35:22.275709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.539 [2024-11-20 16:35:22.275716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.539 [2024-11-20 16:35:22.275719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.539 [2024-11-20 16:35:22.275723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7ab80) on tqpair=0x1c18550 00:23:36.539 [2024-11-20 16:35:22.275750] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:36.539 [2024-11-20 16:35:22.275763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a100) on tqpair=0x1c18550 00:23:36.539 [2024-11-20 16:35:22.275769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.539 [2024-11-20 16:35:22.275774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a280) on tqpair=0x1c18550 00:23:36.539 [2024-11-20 16:35:22.275779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.539 [2024-11-20 16:35:22.275784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a400) on tqpair=0x1c18550 00:23:36.539 [2024-11-20 16:35:22.275789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.539 [2024-11-20 16:35:22.275794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a580) on tqpair=0x1c18550 00:23:36.539 [2024-11-20 16:35:22.275798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.539 [2024-11-20 16:35:22.275806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.539 [2024-11-20 16:35:22.275810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.539 [2024-11-20 16:35:22.275813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c18550) 00:23:36.539 [2024-11-20 16:35:22.275820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.539 [2024-11-20 16:35:22.275832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a580, cid 3, qid 0 00:23:36.539 [2024-11-20 16:35:22.276048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.539 [2024-11-20 16:35:22.276054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.540 [2024-11-20 16:35:22.276058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a580) on tqpair=0x1c18550 00:23:36.540 [2024-11-20 16:35:22.276068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c18550) 00:23:36.540 [2024-11-20 16:35:22.276083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.540 [2024-11-20 16:35:22.276095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a580, cid 3, qid 0 00:23:36.540 [2024-11-20 16:35:22.276298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.540 [2024-11-20 16:35:22.276305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.540 [2024-11-20 16:35:22.276308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a580) on tqpair=0x1c18550 00:23:36.540 [2024-11-20 16:35:22.276319] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:36.540 [2024-11-20 16:35:22.276324] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:36.540 [2024-11-20 16:35:22.276336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c18550) 00:23:36.540 [2024-11-20 16:35:22.276357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.540 [2024-11-20 16:35:22.276371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a580, cid 3, qid 0 00:23:36.540 [2024-11-20 16:35:22.276551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.540 [2024-11-20 16:35:22.276560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.540 [2024-11-20 16:35:22.276563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276567] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a580) on tqpair=0x1c18550 00:23:36.540 [2024-11-20 16:35:22.276577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276581] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c18550) 00:23:36.540 [2024-11-20 16:35:22.276591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.540 [2024-11-20 16:35:22.276601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a580, cid 3, qid 0 00:23:36.540 [2024-11-20 16:35:22.276745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.540 [2024-11-20 16:35:22.276751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.540 [2024-11-20 16:35:22.276754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a580) on tqpair=0x1c18550 00:23:36.540 [2024-11-20 16:35:22.276767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.276775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c18550) 00:23:36.540 [2024-11-20 16:35:22.276781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.540 [2024-11-20 16:35:22.276791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a580, cid 3, qid 0 00:23:36.540 [2024-11-20 16:35:22.280991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.540 [2024-11-20 
16:35:22.280999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.540 [2024-11-20 16:35:22.281002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.281006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a580) on tqpair=0x1c18550 00:23:36.540 [2024-11-20 16:35:22.281016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.281020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.281024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c18550) 00:23:36.540 [2024-11-20 16:35:22.281030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:36.540 [2024-11-20 16:35:22.281041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c7a580, cid 3, qid 0 00:23:36.540 [2024-11-20 16:35:22.281224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:36.540 [2024-11-20 16:35:22.281230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:36.540 [2024-11-20 16:35:22.281234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:36.540 [2024-11-20 16:35:22.281238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c7a580) on tqpair=0x1c18550 00:23:36.540 [2024-11-20 16:35:22.281245] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:36.540 0% 00:23:36.540 Data Units Read: 0 00:23:36.540 Data Units Written: 0 00:23:36.540 Host Read Commands: 0 00:23:36.540 Host Write Commands: 0 00:23:36.540 Controller Busy Time: 0 minutes 00:23:36.540 Power Cycles: 0 00:23:36.540 Power On Hours: 0 hours 00:23:36.540 Unsafe Shutdowns: 0 00:23:36.540 Unrecoverable Media Errors: 0 00:23:36.540 Lifetime Error Log Entries: 0 
00:23:36.540 Warning Temperature Time: 0 minutes 00:23:36.540 Critical Temperature Time: 0 minutes 00:23:36.540 00:23:36.540 Number of Queues 00:23:36.540 ================ 00:23:36.540 Number of I/O Submission Queues: 127 00:23:36.540 Number of I/O Completion Queues: 127 00:23:36.540 00:23:36.540 Active Namespaces 00:23:36.540 ================= 00:23:36.540 Namespace ID:1 00:23:36.540 Error Recovery Timeout: Unlimited 00:23:36.540 Command Set Identifier: NVM (00h) 00:23:36.540 Deallocate: Supported 00:23:36.540 Deallocated/Unwritten Error: Not Supported 00:23:36.540 Deallocated Read Value: Unknown 00:23:36.540 Deallocate in Write Zeroes: Not Supported 00:23:36.540 Deallocated Guard Field: 0xFFFF 00:23:36.540 Flush: Supported 00:23:36.540 Reservation: Supported 00:23:36.540 Namespace Sharing Capabilities: Multiple Controllers 00:23:36.540 Size (in LBAs): 131072 (0GiB) 00:23:36.540 Capacity (in LBAs): 131072 (0GiB) 00:23:36.540 Utilization (in LBAs): 131072 (0GiB) 00:23:36.540 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:36.540 EUI64: ABCDEF0123456789 00:23:36.540 UUID: 148b6fc3-2a38-4aa8-ae99-aee19f4caacc 00:23:36.540 Thin Provisioning: Not Supported 00:23:36.540 Per-NS Atomic Units: Yes 00:23:36.540 Atomic Boundary Size (Normal): 0 00:23:36.540 Atomic Boundary Size (PFail): 0 00:23:36.540 Atomic Boundary Offset: 0 00:23:36.540 Maximum Single Source Range Length: 65535 00:23:36.540 Maximum Copy Length: 65535 00:23:36.540 Maximum Source Range Count: 1 00:23:36.540 NGUID/EUI64 Never Reused: No 00:23:36.540 Namespace Write Protected: No 00:23:36.540 Number of LBA Formats: 1 00:23:36.540 Current LBA Format: LBA Format #00 00:23:36.540 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:36.540 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.540 16:35:22 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.540 rmmod nvme_tcp 00:23:36.540 rmmod nvme_fabrics 00:23:36.540 rmmod nvme_keyring 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2299041 ']' 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2299041 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2299041 ']' 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2299041 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:36.540 16:35:22 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299041 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299041' 00:23:36.540 killing process with pid 2299041 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2299041 00:23:36.540 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2299041 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:36.802 16:35:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.712 16:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.712 00:23:38.712 real 0m11.463s 00:23:38.712 user 0m8.080s 00:23:38.712 sys 0m6.061s 00:23:38.712 16:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.712 16:35:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:38.712 ************************************ 00:23:38.712 END TEST nvmf_identify 00:23:38.712 ************************************ 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.973 ************************************ 00:23:38.973 START TEST nvmf_perf 00:23:38.973 ************************************ 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:38.973 * Looking for test storage... 
00:23:38.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.973 --rc genhtml_branch_coverage=1 00:23:38.973 --rc genhtml_function_coverage=1 00:23:38.973 --rc genhtml_legend=1 00:23:38.973 --rc geninfo_all_blocks=1 00:23:38.973 --rc geninfo_unexecuted_blocks=1 00:23:38.973 00:23:38.973 ' 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:38.973 --rc genhtml_branch_coverage=1 00:23:38.973 --rc genhtml_function_coverage=1 00:23:38.973 --rc genhtml_legend=1 00:23:38.973 --rc geninfo_all_blocks=1 00:23:38.973 --rc geninfo_unexecuted_blocks=1 00:23:38.973 00:23:38.973 ' 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.973 --rc genhtml_branch_coverage=1 00:23:38.973 --rc genhtml_function_coverage=1 00:23:38.973 --rc genhtml_legend=1 00:23:38.973 --rc geninfo_all_blocks=1 00:23:38.973 --rc geninfo_unexecuted_blocks=1 00:23:38.973 00:23:38.973 ' 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.973 --rc genhtml_branch_coverage=1 00:23:38.973 --rc genhtml_function_coverage=1 00:23:38.973 --rc genhtml_legend=1 00:23:38.973 --rc geninfo_all_blocks=1 00:23:38.973 --rc geninfo_unexecuted_blocks=1 00:23:38.973 00:23:38.973 ' 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.973 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:39.235 16:35:24 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:39.235 16:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:47.367 16:35:32 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.367 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.368 
16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:47.368 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:47.368 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:47.368 Found net devices under 0000:31:00.0: cvl_0_0 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:47.368 16:35:32 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:47.368 Found net devices under 0000:31:00.1: cvl_0_1 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:47.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:23:47.368 00:23:47.368 --- 10.0.0.2 ping statistics --- 00:23:47.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.368 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:23:47.368 00:23:47.368 --- 10.0.0.1 ping statistics --- 00:23:47.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.368 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2303549 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2303549 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2303549 ']' 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.368 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.369 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.369 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.369 16:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:47.369 [2024-11-20 16:35:32.517893] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:23:47.369 [2024-11-20 16:35:32.517958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.369 [2024-11-20 16:35:32.601656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.369 [2024-11-20 16:35:32.643593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.369 [2024-11-20 16:35:32.643631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.369 [2024-11-20 16:35:32.643639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.369 [2024-11-20 16:35:32.643646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.369 [2024-11-20 16:35:32.643652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:47.369 [2024-11-20 16:35:32.645516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.369 [2024-11-20 16:35:32.645632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.369 [2024-11-20 16:35:32.645789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.369 [2024-11-20 16:35:32.645789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.629 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.629 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:47.629 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:47.629 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.629 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:47.629 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.629 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:47.630 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:48.199 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:48.199 16:35:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:48.199 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:48.199 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:48.460 16:35:34 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:48.460 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:48.460 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:48.460 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:48.460 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:48.460 [2024-11-20 16:35:34.396025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.721 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:48.721 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:48.721 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:48.981 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:48.981 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:49.241 16:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.241 [2024-11-20 16:35:35.118663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.241 16:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:49.501 16:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:49.501 16:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:49.501 16:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:49.501 16:35:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:50.883 Initializing NVMe Controllers 00:23:50.883 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:50.883 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:50.883 Initialization complete. Launching workers. 00:23:50.883 ======================================================== 00:23:50.883 Latency(us) 00:23:50.883 Device Information : IOPS MiB/s Average min max 00:23:50.883 PCIE (0000:65:00.0) NSID 1 from core 0: 79170.73 309.26 403.62 13.33 5581.44 00:23:50.883 ======================================================== 00:23:50.883 Total : 79170.73 309.26 403.62 13.33 5581.44 00:23:50.883 00:23:50.883 16:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:52.264 Initializing NVMe Controllers 00:23:52.264 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:52.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:52.264 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:52.264 Initialization complete. Launching workers. 
00:23:52.264 ======================================================== 00:23:52.264 Latency(us) 00:23:52.264 Device Information : IOPS MiB/s Average min max 00:23:52.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.95 0.32 12383.41 261.85 45882.58 00:23:52.264 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.97 0.18 21925.84 7957.66 50882.21 00:23:52.264 ======================================================== 00:23:52.264 Total : 128.92 0.50 15786.13 261.85 50882.21 00:23:52.264 00:23:52.264 16:35:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:53.646 Initializing NVMe Controllers 00:23:53.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:53.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:53.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:53.646 Initialization complete. Launching workers. 
00:23:53.646 ======================================================== 00:23:53.646 Latency(us) 00:23:53.646 Device Information : IOPS MiB/s Average min max 00:23:53.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10376.68 40.53 3084.61 517.38 9959.61 00:23:53.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3743.88 14.62 8605.63 5325.52 19157.45 00:23:53.646 ======================================================== 00:23:53.646 Total : 14120.56 55.16 4548.44 517.38 19157.45 00:23:53.646 00:23:53.646 16:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:53.646 16:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:53.646 16:35:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:56.315 Initializing NVMe Controllers 00:23:56.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.315 Controller IO queue size 128, less than required. 00:23:56.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:56.315 Controller IO queue size 128, less than required. 00:23:56.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:56.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:56.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:56.315 Initialization complete. Launching workers. 
00:23:56.315 ======================================================== 00:23:56.315 Latency(us) 00:23:56.315 Device Information : IOPS MiB/s Average min max 00:23:56.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2025.34 506.34 64026.70 39903.82 101851.10 00:23:56.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.01 149.00 224019.46 70458.96 347174.82 00:23:56.315 ======================================================== 00:23:56.315 Total : 2621.35 655.34 100403.94 39903.82 347174.82 00:23:56.315 00:23:56.315 16:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:56.315 No valid NVMe controllers or AIO or URING devices found 00:23:56.315 Initializing NVMe Controllers 00:23:56.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.315 Controller IO queue size 128, less than required. 00:23:56.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:56.315 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:56.315 Controller IO queue size 128, less than required. 00:23:56.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:56.315 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:56.315 WARNING: Some requested NVMe devices were skipped 00:23:56.315 16:35:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:58.855 Initializing NVMe Controllers 00:23:58.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:58.855 Controller IO queue size 128, less than required. 00:23:58.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.855 Controller IO queue size 128, less than required. 00:23:58.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:58.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:58.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:58.855 Initialization complete. Launching workers. 
00:23:58.855 00:23:58.855 ==================== 00:23:58.855 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:58.855 TCP transport: 00:23:58.855 polls: 23913 00:23:58.855 idle_polls: 13279 00:23:58.855 sock_completions: 10634 00:23:58.855 nvme_completions: 6477 00:23:58.855 submitted_requests: 9640 00:23:58.855 queued_requests: 1 00:23:58.855 00:23:58.855 ==================== 00:23:58.855 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:58.855 TCP transport: 00:23:58.855 polls: 24590 00:23:58.855 idle_polls: 14818 00:23:58.855 sock_completions: 9772 00:23:58.855 nvme_completions: 6507 00:23:58.855 submitted_requests: 9812 00:23:58.855 queued_requests: 1 00:23:58.855 ======================================================== 00:23:58.855 Latency(us) 00:23:58.855 Device Information : IOPS MiB/s Average min max 00:23:58.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1618.95 404.74 80861.85 48254.52 141382.71 00:23:58.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1626.45 406.61 78825.97 47095.46 144919.15 00:23:58.855 ======================================================== 00:23:58.855 Total : 3245.41 811.35 79841.56 47095.46 144919.15 00:23:58.855 00:23:58.855 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:58.855 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.113 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:59.113 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:59.113 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:59.113 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.113 16:35:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:59.113 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.113 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:59.113 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.113 16:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.113 rmmod nvme_tcp 00:23:59.113 rmmod nvme_fabrics 00:23:59.113 rmmod nvme_keyring 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2303549 ']' 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2303549 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2303549 ']' 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2303549 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.113 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2303549 00:23:59.372 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.373 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.373 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2303549' 00:23:59.373 killing process with pid 2303549 00:23:59.373 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 2303549 00:23:59.373 16:35:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2303549 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.283 16:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.194 16:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.194 00:24:03.194 real 0m24.419s 00:24:03.194 user 0m58.860s 00:24:03.194 sys 0m8.613s 00:24:03.195 16:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.195 16:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:03.195 ************************************ 00:24:03.195 END TEST nvmf_perf 00:24:03.195 ************************************ 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.455 ************************************ 00:24:03.455 START TEST nvmf_fio_host 00:24:03.455 ************************************ 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:03.455 * Looking for test storage... 00:24:03.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.455 16:35:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.455 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.715 16:35:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:03.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.715 --rc genhtml_branch_coverage=1 00:24:03.715 --rc genhtml_function_coverage=1 00:24:03.715 --rc genhtml_legend=1 00:24:03.715 --rc geninfo_all_blocks=1 00:24:03.715 --rc geninfo_unexecuted_blocks=1 00:24:03.715 00:24:03.715 ' 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:03.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.715 --rc genhtml_branch_coverage=1 00:24:03.715 --rc genhtml_function_coverage=1 00:24:03.715 --rc genhtml_legend=1 00:24:03.715 --rc geninfo_all_blocks=1 00:24:03.715 --rc geninfo_unexecuted_blocks=1 00:24:03.715 00:24:03.715 ' 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:03.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.715 --rc genhtml_branch_coverage=1 00:24:03.715 --rc genhtml_function_coverage=1 00:24:03.715 --rc genhtml_legend=1 00:24:03.715 --rc geninfo_all_blocks=1 00:24:03.715 --rc geninfo_unexecuted_blocks=1 00:24:03.715 00:24:03.715 ' 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:03.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.715 --rc genhtml_branch_coverage=1 00:24:03.715 --rc genhtml_function_coverage=1 00:24:03.715 --rc genhtml_legend=1 00:24:03.715 --rc geninfo_all_blocks=1 00:24:03.715 --rc geninfo_unexecuted_blocks=1 00:24:03.715 00:24:03.715 ' 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.715 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.716 16:35:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.716 16:35:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:24:11.852 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:11.852 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.852 16:35:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:11.852 Found net devices under 0000:31:00.0: cvl_0_0 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:11.852 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:11.853 Found net devices under 0000:31:00.1: cvl_0_1 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.853 16:35:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:11.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:24:11.853 00:24:11.853 --- 10.0.0.2 ping statistics --- 00:24:11.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.853 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:11.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:24:11.853 00:24:11.853 --- 10.0.0.1 ping statistics --- 00:24:11.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.853 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2310650 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2310650 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2310650 ']' 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.853 16:35:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.853 [2024-11-20 16:35:56.804927] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:24:11.853 [2024-11-20 16:35:56.804977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.853 [2024-11-20 16:35:56.886915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.853 [2024-11-20 16:35:56.922464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.853 [2024-11-20 16:35:56.922497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:11.853 [2024-11-20 16:35:56.922506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.853 [2024-11-20 16:35:56.922513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.853 [2024-11-20 16:35:56.922522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.853 [2024-11-20 16:35:56.924046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.853 [2024-11-20 16:35:56.924313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.853 [2024-11-20 16:35:56.924330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.853 [2024-11-20 16:35:56.924336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.853 16:35:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.853 16:35:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:11.853 16:35:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:11.853 [2024-11-20 16:35:57.755922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.853 16:35:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:11.853 16:35:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.853 16:35:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.114 16:35:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:12.114 Malloc1 00:24:12.114 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:12.375 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:12.635 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:12.635 [2024-11-20 16:35:58.554638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.635 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:12.896 16:35:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:12.896 16:35:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:13.472 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:13.472 fio-3.35 00:24:13.472 Starting 1 thread 00:24:16.015 00:24:16.015 test: (groupid=0, jobs=1): err= 0: pid=2311207: Wed Nov 20 16:36:01 2024 00:24:16.015 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(107MiB/2005msec) 00:24:16.015 slat (usec): min=2, max=297, avg= 2.19, stdev= 2.59 00:24:16.015 clat (usec): min=3312, max=9865, avg=5124.75, stdev=472.17 00:24:16.015 lat (usec): min=3314, max=9871, avg=5126.94, stdev=472.51 00:24:16.016 clat percentiles (usec): 00:24:16.016 | 1.00th=[ 4146], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:16.016 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:16.016 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:24:16.016 | 99.00th=[ 6718], 99.50th=[ 8094], 99.90th=[ 9503], 99.95th=[ 9634], 00:24:16.016 | 99.99th=[ 9765] 00:24:16.016 bw ( KiB/s): min=53256, max=55552, per=100.00%, avg=54822.00, stdev=1083.13, samples=4 00:24:16.016 iops : min=13314, max=13888, avg=13705.50, stdev=270.78, samples=4 00:24:16.016 write: IOPS=13.7k, BW=53.4MiB/s (56.0MB/s)(107MiB/2005msec); 0 zone resets 00:24:16.016 slat (usec): min=2, max=263, avg= 2.26, stdev= 1.79 00:24:16.016 clat (usec): min=2616, max=8479, avg=4152.09, stdev=435.64 00:24:16.016 lat (usec): min=2618, max=8485, avg=4154.34, stdev=436.05 00:24:16.016 clat percentiles (usec): 00:24:16.016 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3916], 00:24:16.016 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:24:16.016 | 70.00th=[ 
4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:24:16.016 | 99.00th=[ 6063], 99.50th=[ 7111], 99.90th=[ 8094], 99.95th=[ 8291], 00:24:16.016 | 99.99th=[ 8455] 00:24:16.016 bw ( KiB/s): min=53704, max=55680, per=100.00%, avg=54740.00, stdev=866.08, samples=4 00:24:16.016 iops : min=13426, max=13920, avg=13685.00, stdev=216.52, samples=4 00:24:16.016 lat (msec) : 4=16.45%, 10=83.55% 00:24:16.016 cpu : usr=76.20%, sys=23.05%, ctx=31, majf=0, minf=17 00:24:16.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:16.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:16.016 issued rwts: total=27478,27430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:16.016 00:24:16.016 Run status group 0 (all jobs): 00:24:16.016 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=107MiB (113MB), run=2005-2005msec 00:24:16.016 WRITE: bw=53.4MiB/s (56.0MB/s), 53.4MiB/s-53.4MiB/s (56.0MB/s-56.0MB/s), io=107MiB (112MB), run=2005-2005msec 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:16.016 16:36:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:16.016 16:36:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:16.016 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:16.016 fio-3.35 00:24:16.016 Starting 1 thread 00:24:18.585 00:24:18.585 test: (groupid=0, jobs=1): err= 0: pid=2312111: Wed Nov 20 16:36:04 2024 00:24:18.585 read: IOPS=8326, BW=130MiB/s (136MB/s)(261MiB/2006msec) 00:24:18.585 slat (usec): min=3, max=109, avg= 3.64, stdev= 1.54 00:24:18.585 clat (msec): min=2, max=213, avg= 9.34, stdev=14.78 00:24:18.585 lat (msec): min=2, max=213, avg= 9.34, stdev=14.78 00:24:18.585 clat percentiles (msec): 00:24:18.585 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:24:18.585 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:24:18.585 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 12], 00:24:18.585 | 99.00th=[ 15], 99.50th=[ 211], 99.90th=[ 213], 99.95th=[ 213], 00:24:18.585 | 99.99th=[ 213] 00:24:18.585 bw ( KiB/s): min=42784, max=87392, per=51.41%, avg=68488.00, stdev=18972.76, samples=4 00:24:18.585 iops : min= 2674, max= 5462, avg=4280.50, stdev=1185.80, samples=4 00:24:18.585 write: IOPS=4795, BW=74.9MiB/s (78.6MB/s)(140MiB/1864msec); 0 zone resets 00:24:18.585 slat (usec): min=39, max=325, avg=41.31, stdev= 8.60 00:24:18.585 clat (msec): min=2, max=213, avg=10.39, stdev=13.84 00:24:18.585 lat (msec): min=2, max=213, avg=10.43, stdev=13.84 00:24:18.585 clat percentiles (msec): 00:24:18.585 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:24:18.585 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:24:18.585 | 70.00th=[ 11], 
80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 13], 00:24:18.585 | 99.00th=[ 16], 99.50th=[ 21], 99.90th=[ 213], 99.95th=[ 213], 00:24:18.585 | 99.99th=[ 213] 00:24:18.585 bw ( KiB/s): min=44608, max=90592, per=92.85%, avg=71240.00, stdev=19499.96, samples=4 00:24:18.585 iops : min= 2788, max= 5662, avg=4452.50, stdev=1218.75, samples=4 00:24:18.585 lat (msec) : 4=0.45%, 10=74.05%, 20=24.98%, 50=0.02%, 250=0.50% 00:24:18.585 cpu : usr=84.49%, sys=14.26%, ctx=14, majf=0, minf=31 00:24:18.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:18.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:18.585 issued rwts: total=16702,8939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:18.585 00:24:18.585 Run status group 0 (all jobs): 00:24:18.585 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=261MiB (274MB), run=2006-2006msec 00:24:18.585 WRITE: bw=74.9MiB/s (78.6MB/s), 74.9MiB/s-74.9MiB/s (78.6MB/s-78.6MB/s), io=140MiB (146MB), run=1864-1864msec 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:18.585 16:36:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.585 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.585 rmmod nvme_tcp 00:24:18.585 rmmod nvme_fabrics 00:24:18.585 rmmod nvme_keyring 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2310650 ']' 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2310650 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2310650 ']' 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2310650 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310650 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310650' 00:24:18.846 killing process with pid 2310650 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@973 -- # kill 2310650 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2310650 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.846 16:36:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.393 16:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:21.393 00:24:21.393 real 0m17.616s 00:24:21.393 user 1m0.284s 00:24:21.393 sys 0m7.369s 00:24:21.393 16:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.393 16:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.393 ************************************ 00:24:21.393 END TEST nvmf_fio_host 00:24:21.393 ************************************ 00:24:21.393 16:36:06 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:21.393 16:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:21.393 16:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.393 16:36:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.393 ************************************ 00:24:21.393 START TEST nvmf_failover 00:24:21.393 ************************************ 00:24:21.393 16:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:21.393 * Looking for test storage... 00:24:21.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.393 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.394 --rc genhtml_branch_coverage=1 00:24:21.394 --rc genhtml_function_coverage=1 00:24:21.394 --rc genhtml_legend=1 00:24:21.394 --rc geninfo_all_blocks=1 00:24:21.394 --rc geninfo_unexecuted_blocks=1 00:24:21.394 00:24:21.394 ' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.394 --rc genhtml_branch_coverage=1 00:24:21.394 --rc genhtml_function_coverage=1 00:24:21.394 --rc genhtml_legend=1 00:24:21.394 --rc geninfo_all_blocks=1 00:24:21.394 --rc geninfo_unexecuted_blocks=1 00:24:21.394 00:24:21.394 ' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.394 --rc genhtml_branch_coverage=1 00:24:21.394 --rc genhtml_function_coverage=1 00:24:21.394 --rc genhtml_legend=1 00:24:21.394 --rc geninfo_all_blocks=1 00:24:21.394 --rc geninfo_unexecuted_blocks=1 00:24:21.394 00:24:21.394 ' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:21.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.394 --rc genhtml_branch_coverage=1 00:24:21.394 --rc genhtml_function_coverage=1 00:24:21.394 --rc genhtml_legend=1 00:24:21.394 --rc geninfo_all_blocks=1 00:24:21.394 --rc geninfo_unexecuted_blocks=1 00:24:21.394 00:24:21.394 ' 00:24:21.394 
16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.394 
16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.394 16:36:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:21.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.394 16:36:07 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:21.394 16:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:29.549 16:36:13 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:29.549 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:29.549 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:29.549 Found net devices under 0000:31:00.0: cvl_0_0 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.549 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:24:29.550 Found net devices under 0000:31:00.1: cvl_0_1 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.550 16:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # 
ip -4 addr flush cvl_0_0 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:29.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:24:29.550 00:24:29.550 --- 10.0.0.2 ping statistics --- 00:24:29.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.550 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:29.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:24:29.550 00:24:29.550 --- 10.0.0.1 ping statistics --- 00:24:29.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.550 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2317264 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2317264 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2317264 ']' 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.550 16:36:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.550 [2024-11-20 16:36:14.403021] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:24:29.550 [2024-11-20 16:36:14.403082] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.550 [2024-11-20 16:36:14.502415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:29.550 [2024-11-20 16:36:14.554529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.550 [2024-11-20 16:36:14.554578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.550 [2024-11-20 16:36:14.554587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.550 [2024-11-20 16:36:14.554594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:29.550 [2024-11-20 16:36:14.554600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.550 [2024-11-20 16:36:14.556513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.550 [2024-11-20 16:36:14.556679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.550 [2024-11-20 16:36:14.556680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.550 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.550 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:29.550 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.550 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.550 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:29.550 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.550 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:29.550 [2024-11-20 16:36:15.410297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.550 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:29.812 Malloc0 00:24:29.812 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.072 16:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:30.072 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.333 [2024-11-20 16:36:16.161385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.333 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:30.593 [2024-11-20 16:36:16.337854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:30.593 [2024-11-20 16:36:16.514391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2317628 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2317628 /var/tmp/bdevperf.sock 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2317628 ']' 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.593 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:30.854 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.854 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:30.854 16:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:31.425 NVMe0n1 00:24:31.425 16:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:31.687 00:24:31.687 16:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2317896 00:24:31.687 16:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.687 16:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:32.630 16:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.891 [2024-11-20 16:36:18.684766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1828e90 is same with the state(6) to be set 00:24:32.891 16:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:36.204 16:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:36.204 00:24:36.204 16:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.466 [2024-11-20
16:36:22.307749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1829990 is same with the state(6) to be set 00:24:36.466 16:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:39.767 16:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.767 [2024-11-20 16:36:25.494806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.767 16:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:40.709 16:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.970 [2024-11-20 16:36:26.685062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eedf0 is same with the state(6) to be set 00:24:40.970 16:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2317896 00:24:47.565 { 00:24:47.565 "results": [ 00:24:47.565 { 00:24:47.565 "job": "NVMe0n1", 00:24:47.565 "core_mask": "0x1", 00:24:47.565 "workload": "verify", 00:24:47.565 "status": "finished", 00:24:47.565 "verify_range": { 00:24:47.565 "start": 0, 00:24:47.566 "length": 16384 00:24:47.566 }, 00:24:47.566 "queue_depth": 128, 00:24:47.566 "io_size": 4096, 00:24:47.566 "runtime": 15.005762, 00:24:47.566 "iops": 11168.843008439026, 00:24:47.566 "mibps": 43.628293001714944, 00:24:47.566 "io_failed": 5621, 00:24:47.566 "io_timeout": 0, 00:24:47.566 "avg_latency_us": 11060.902932489695, 00:24:47.566 "min_latency_us": 532.48, 00:24:47.566 "max_latency_us": 28835.84 00:24:47.566 } 00:24:47.566 ], 00:24:47.566 "core_count": 1 00:24:47.566 } 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2317628 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2317628 ']' 
00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2317628 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2317628 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2317628' 00:24:47.566 killing process with pid 2317628 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2317628 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2317628 00:24:47.566 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.566 [2024-11-20 16:36:16.587638] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:24:47.566 [2024-11-20 16:36:16.587696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317628 ] 00:24:47.566 [2024-11-20 16:36:16.660544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.566 [2024-11-20 16:36:16.695940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.566 Running I/O for 15 seconds... 
00:24:47.566 11057.00 IOPS, 43.19 MiB/s [2024-11-20T15:36:33.525Z] [2024-11-20 16:36:18.685726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.566 [2024-11-20 16:36:18.685759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.566 [2024-11-20 16:36:18.685770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.566 [2024-11-20 16:36:18.685778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.566 [2024-11-20 16:36:18.685787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.566 [2024-11-20 16:36:18.685794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.566 [2024-11-20 16:36:18.685803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.566 [2024-11-20 16:36:18.685810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.566 [2024-11-20 16:36:18.685817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dafc0 is same with the state(6) to be set 00:24:47.566 [2024-11-20 16:36:18.685882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.566 [2024-11-20 16:36:18.685893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.566 [2024-11-20 
16:36:18.685907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.566 [2024-11-20 16:36:18.685915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.566 [repeated per-command notice pairs elided: WRITE sqid:1 commands for lba 95504 through 96232 (len:8 each) and READ sqid:1 commands for lba 95224 through 95240, every one completed ABORTED - SQ DELETION (00/08)] 00:24:47.568 [2024-11-20 16:36:18.687546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.568 [2024-11-20 16:36:18.687553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.568 [2024-11-20 16:36:18.687562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.568 [2024-11-20 16:36:18.687569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.568 [2024-11-20 16:36:18.687579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.568 [2024-11-20 16:36:18.687587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.568 [2024-11-20 16:36:18.687596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.568 [2024-11-20 16:36:18.687604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.568 [2024-11-20 16:36:18.687613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.568 [2024-11-20 16:36:18.687621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.568 [2024-11-20 16:36:18.687631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.568 [2024-11-20 16:36:18.687638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.568 [2024-11-20 16:36:18.687648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.568 
[2024-11-20 16:36:18.687655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.568 [2024-11-20 16:36:18.687665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:18.687761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 
[2024-11-20 16:36:18.687955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.687986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.687993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.688002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.688009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.688019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.688026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.688036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.688044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.688054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:18.688062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.688081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.569 [2024-11-20 16:36:18.688088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.569 [2024-11-20 16:36:18.688098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95480 len:8 PRP1 0x0 PRP2 0x0 00:24:47.569 [2024-11-20 16:36:18.688106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:18.688145] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:47.569 [2024-11-20 16:36:18.688160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:47.569 [2024-11-20 16:36:18.691682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:47.569 [2024-11-20 16:36:18.691706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dafc0 (9): Bad file descriptor 00:24:47.569 [2024-11-20 16:36:18.758992] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:24:47.569 10796.50 IOPS, 42.17 MiB/s [2024-11-20T15:36:33.528Z] 10917.67 IOPS, 42.65 MiB/s [2024-11-20T15:36:33.528Z] 10996.25 IOPS, 42.95 MiB/s [2024-11-20T15:36:33.528Z] [2024-11-20 16:36:22.311914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.569 [2024-11-20 16:36:22.311951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.311968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.311990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.569 [2024-11-20 16:36:22.312163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.569 [2024-11-20 16:36:22.312173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 
[2024-11-20 16:36:22.312249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312342] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 16:36:22.312714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.570 [2024-11-20 
16:36:22.312731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.570 [2024-11-20 16:36:22.312739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312823] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.312985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.312993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.313010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.313027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.571 [2024-11-20 16:36:22.313043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42768 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.571 [2024-11-20 16:36:22.313127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.571 [2024-11-20 16:36:22.313143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.571 
[2024-11-20 16:36:22.313159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.571 [2024-11-20 16:36:22.313175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dafc0 is same with the state(6) to be set 00:24:47.571 [2024-11-20 16:36:22.313336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42776 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42784 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 
16:36:22.313402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42792 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313423] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42800 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42808 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42816 len:8 
PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42824 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42832 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.571 [2024-11-20 16:36:22.313572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42840 len:8 PRP1 0x0 PRP2 0x0 00:24:47.571 [2024-11-20 16:36:22.313579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.571 [2024-11-20 16:36:22.313587] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.571 [2024-11-20 16:36:22.313592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42848 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42856 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42864 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313680] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42872 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42880 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42888 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42896 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42904 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42912 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42920 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313862] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42928 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42936 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42944 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42256 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 
[2024-11-20 16:36:22.313956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.313964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.313969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.313975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42952 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.313987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42960 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.314024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42968 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.314050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42976 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.314076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42984 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.314105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42992 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.314131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43000 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.314159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43008 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.314186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43016 len:8 PRP1 0x0 PRP2 0x0 00:24:47.572 [2024-11-20 16:36:22.314215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.572 [2024-11-20 16:36:22.314223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.572 [2024-11-20 16:36:22.314228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.572 [2024-11-20 16:36:22.314234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43024 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43032 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43040 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43048 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43056 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43064 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43072 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43080 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43088 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43096 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 [2024-11-20 16:36:22.314492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.573 [2024-11-20 16:36:22.314498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43104 len:8 PRP1 0x0 PRP2 0x0 00:24:47.573 [2024-11-20 16:36:22.314506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.573 [2024-11-20 16:36:22.314513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.573 
[2024-11-20 16:36:22.314520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.314526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43112 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.314533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.314541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.314546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.314552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43120 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.314560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.314567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.314573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.314579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43128 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.314586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.314593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43136 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43144 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43152 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43160 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43168 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43176 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43184 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43192 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43200 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.573 [2024-11-20 16:36:22.324470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.573 [2024-11-20 16:36:22.324475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.573 [2024-11-20 16:36:22.324481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43208 len:8 PRP1 0x0 PRP2 0x0
00:24:47.573 [2024-11-20 16:36:22.324489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43216 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43224 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43232 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43240 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43248 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43256 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43264 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42248 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42264 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42272 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42280 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42288 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42296 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42304 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42312 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42320 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42328 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.324963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42336 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.324970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.324989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.324995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.325001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42344 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.325008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.325016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.325021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.325027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42352 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.325034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.325043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.325048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.325054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42360 len:8 PRP1 0x0 PRP2 0x0
00:24:47.574 [2024-11-20 16:36:22.325062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.574 [2024-11-20 16:36:22.325069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.574 [2024-11-20 16:36:22.325075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.574 [2024-11-20 16:36:22.325081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42368 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42376 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42384 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42392 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42400 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42408 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42416 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42424 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42432 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42440 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42448 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42456 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42464 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42472 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42480 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42488 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42496 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42504 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42512 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42520 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42528 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.575 [2024-11-20 16:36:22.325638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.575 [2024-11-20 16:36:22.325644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42536 len:8 PRP1 0x0 PRP2 0x0
00:24:47.575 [2024-11-20 16:36:22.325651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.575 [2024-11-20 16:36:22.325658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42544 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42552 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42560 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42568 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42576 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42584 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325824] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42592 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42600 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42608 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42616 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42624 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.325961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42632 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.325968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.325978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.325988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.333046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42640 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.333074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.333088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.333094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.333101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42648 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.333113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.333121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.333127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.333133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42656 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.333140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.576 [2024-11-20 16:36:22.333148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.576 [2024-11-20 16:36:22.333153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.576 [2024-11-20 16:36:22.333160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42664 len:8 PRP1 0x0 PRP2 0x0
00:24:47.576 [2024-11-20 16:36:22.333167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:47.576 [2024-11-20 16:36:22.333174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.576 [2024-11-20 16:36:22.333180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.576 [2024-11-20 16:36:22.333186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42672 len:8 PRP1 0x0 PRP2 0x0 00:24:47.576 [2024-11-20 16:36:22.333193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.576 [2024-11-20 16:36:22.333201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.576 [2024-11-20 16:36:22.333207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.576 [2024-11-20 16:36:22.333213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42680 len:8 PRP1 0x0 PRP2 0x0 00:24:47.576 [2024-11-20 16:36:22.333220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.576 [2024-11-20 16:36:22.333227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.576 [2024-11-20 16:36:22.333233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.576 [2024-11-20 16:36:22.333239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42688 len:8 PRP1 0x0 PRP2 0x0 00:24:47.576 [2024-11-20 16:36:22.333246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.576 [2024-11-20 16:36:22.333254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.576 [2024-11-20 16:36:22.333259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:47.576 [2024-11-20 16:36:22.333265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42696 len:8 PRP1 0x0 PRP2 0x0 00:24:47.576 [2024-11-20 16:36:22.333272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.576 [2024-11-20 16:36:22.333280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.576 [2024-11-20 16:36:22.333285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.576 [2024-11-20 16:36:22.333291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42704 len:8 PRP1 0x0 PRP2 0x0 00:24:47.576 [2024-11-20 16:36:22.333298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.576 [2024-11-20 16:36:22.333306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.576 [2024-11-20 16:36:22.333312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.576 [2024-11-20 16:36:22.333319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42712 len:8 PRP1 0x0 PRP2 0x0 00:24:47.576 [2024-11-20 16:36:22.333326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.576 [2024-11-20 16:36:22.333334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.576 [2024-11-20 16:36:22.333340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.576 [2024-11-20 16:36:22.333345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42720 len:8 PRP1 0x0 PRP2 0x0 00:24:47.576 [2024-11-20 16:36:22.333353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.576 [2024-11-20 16:36:22.333360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.576 [2024-11-20 16:36:22.333366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.576 [2024-11-20 16:36:22.333372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42728 len:8 PRP1 0x0 PRP2 0x0 00:24:47.576 [2024-11-20 16:36:22.333379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:22.333387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.577 [2024-11-20 16:36:22.333392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.577 [2024-11-20 16:36:22.333399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42736 len:8 PRP1 0x0 PRP2 0x0 00:24:47.577 [2024-11-20 16:36:22.333406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:22.333413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.577 [2024-11-20 16:36:22.333419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.577 [2024-11-20 16:36:22.333425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42744 len:8 PRP1 0x0 PRP2 0x0 00:24:47.577 [2024-11-20 16:36:22.333432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:22.333440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.577 
[2024-11-20 16:36:22.333445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.577 [2024-11-20 16:36:22.333452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42752 len:8 PRP1 0x0 PRP2 0x0 00:24:47.577 [2024-11-20 16:36:22.333459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:22.333466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.577 [2024-11-20 16:36:22.333472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.577 [2024-11-20 16:36:22.333478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42760 len:8 PRP1 0x0 PRP2 0x0 00:24:47.577 [2024-11-20 16:36:22.333485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:22.333494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.577 [2024-11-20 16:36:22.333499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.577 [2024-11-20 16:36:22.333505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42768 len:8 PRP1 0x0 PRP2 0x0 00:24:47.577 [2024-11-20 16:36:22.333513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:22.333554] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:47.577 [2024-11-20 16:36:22.333564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:24:47.577 [2024-11-20 16:36:22.333607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dafc0 (9): Bad file descriptor 00:24:47.577 [2024-11-20 16:36:22.337109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:47.577 [2024-11-20 16:36:22.368367] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:24:47.577 10914.40 IOPS, 42.63 MiB/s [2024-11-20T15:36:33.536Z] 10946.67 IOPS, 42.76 MiB/s [2024-11-20T15:36:33.536Z] 10968.71 IOPS, 42.85 MiB/s [2024-11-20T15:36:33.536Z] 11003.62 IOPS, 42.98 MiB/s [2024-11-20T15:36:33.536Z] 11051.89 IOPS, 43.17 MiB/s [2024-11-20T15:36:33.536Z] [2024-11-20 16:36:26.685357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-11-20 16:36:26.685389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-11-20 16:36:26.685414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-11-20 16:36:26.685431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-11-20 
16:36:26.685449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-11-20 16:36:26.685465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-11-20 16:36:26.685482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.577 [2024-11-20 16:36:26.685499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:58 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:47.577 [2024-11-20 16:36:26.685647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.577 [2024-11-20 16:36:26.685847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.577 [2024-11-20 16:36:26.685854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.685864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.685871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.685880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.685887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.685896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.685903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.685913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.685920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 
[2024-11-20 16:36:26.685929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.685937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.685947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.685954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.685963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.685971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.685980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.685993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.686010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.686043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.578 [2024-11-20 16:36:26.686060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 [2024-11-20 16:36:26.686206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.578 [2024-11-20 16:36:26.686213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.578 
[2024-11-20 16:36:26.686223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.578 [2024-11-20 16:36:26.686230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.578 [... identical READ command / ABORTED - SQ DELETION (00/08) completion pairs repeated for lba:47736 through lba:48336, plus one WRITE at lba:48608, omitted ...]
00:24:47.580 [2024-11-20 16:36:26.687563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:47.580 [2024-11-20 16:36:26.687571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:47.580 [2024-11-20 16:36:26.687578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48344 len:8 PRP1 0x0 PRP2 0x0
00:24:47.580 [2024-11-20 16:36:26.687585] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.580 [2024-11-20 16:36:26.687626] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:47.580 [2024-11-20 16:36:26.687648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.580 [2024-11-20 16:36:26.687656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.580 [2024-11-20 16:36:26.687665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.580 [2024-11-20 16:36:26.687672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.580 [2024-11-20 16:36:26.687681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.580 [2024-11-20 16:36:26.687688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.580 [2024-11-20 16:36:26.687696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.580 [2024-11-20 16:36:26.687703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.580 [2024-11-20 16:36:26.687711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:47.580 [2024-11-20 16:36:26.691315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:47.580 [2024-11-20 16:36:26.691342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dafc0 (9): Bad file descriptor
00:24:47.580 [2024-11-20 16:36:26.726217] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:24:47.580 11113.50 IOPS, 43.41 MiB/s [2024-11-20T15:36:33.539Z] 11116.18 IOPS, 43.42 MiB/s [2024-11-20T15:36:33.539Z] 11122.50 IOPS, 43.45 MiB/s [2024-11-20T15:36:33.539Z] 11164.15 IOPS, 43.61 MiB/s [2024-11-20T15:36:33.539Z] 11168.00 IOPS, 43.62 MiB/s
00:24:47.580 Latency(us)
00:24:47.580 [2024-11-20T15:36:33.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:47.580 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:47.580 Verification LBA range: start 0x0 length 0x4000
00:24:47.580 NVMe0n1 : 15.01 11168.84 43.63 374.59 0.00 11060.90 532.48 28835.84
00:24:47.580 [2024-11-20T15:36:33.539Z] ===================================================================================================================
00:24:47.580 [2024-11-20T15:36:33.539Z] Total : 11168.84 43.63 374.59 0.00 11060.90 532.48 28835.84
00:24:47.580 Received shutdown signal, test time was about 15.000000 seconds
00:24:47.580
00:24:47.580 Latency(us)
00:24:47.580 [2024-11-20T15:36:33.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:47.580 [2024-11-20T15:36:33.539Z] ===================================================================================================================
00:24:47.580 [2024-11-20T15:36:33.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:47.580 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:47.580 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@65 -- # count=3
00:24:47.580 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:47.580 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2320793
00:24:47.580 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2320793 /var/tmp/bdevperf.sock
00:24:47.580 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:47.581 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2320793 ']'
00:24:47.581 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:47.581 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:47.581 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:47.581 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:47.581 16:36:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:47.841 16:36:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:47.841 16:36:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:47.841 16:36:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:48.101 [2024-11-20 16:36:33.868685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:48.101 16:36:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:48.101 [2024-11-20 16:36:34.045138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:48.361 16:36:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:48.361 NVMe0n1
00:24:48.621 16:36:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:48.621
00:24:48.621 16:36:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:48.882
00:24:49.143 16:36:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:49.143 16:36:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:49.143 16:36:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:49.403 16:36:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:52.703 16:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:52.703 16:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:52.703 16:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2321949
00:24:52.703 16:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2321949
00:24:52.703 16:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:53.643 {
00:24:53.643 "results": [
00:24:53.643 {
00:24:53.643 "job": "NVMe0n1",
00:24:53.643 "core_mask": "0x1",
00:24:53.643 "workload": "verify",
00:24:53.643 "status": "finished",
00:24:53.643 "verify_range": {
00:24:53.643 "start": 0,
00:24:53.643 "length": 16384
00:24:53.643 },
00:24:53.643 "queue_depth": 128,
00:24:53.643 "io_size": 4096,
00:24:53.643 "runtime": 1.006821,
00:24:53.643 "iops": 11301.909674112876,
00:24:53.643 "mibps": 44.14808466450342,
00:24:53.643 "io_failed": 0,
00:24:53.643 "io_timeout": 0,
00:24:53.643 "avg_latency_us": 11268.593406567654,
00:24:53.643 "min_latency_us": 1686.1866666666667,
00:24:53.643 "max_latency_us": 9611.946666666667
00:24:53.643 }
00:24:53.643 ],
00:24:53.643 "core_count": 1
00:24:53.643 }
00:24:53.643 16:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:53.643 [2024-11-20 16:36:32.923746] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:24:53.643 [2024-11-20 16:36:32.923804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2320793 ]
00:24:53.643 [2024-11-20 16:36:32.995638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:53.643 [2024-11-20 16:36:33.030280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:53.643 [2024-11-20 16:36:35.191266] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:53.643 [2024-11-20 16:36:35.191310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:53.643 [2024-11-20 16:36:35.191322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:53.643 [2024-11-20 16:36:35.191332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:53.643 [2024-11-20 16:36:35.191340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:53.643 [2024-11-20 16:36:35.191348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0
cdw10:00000000 cdw11:00000000 00:24:53.643 [2024-11-20 16:36:35.191356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.643 [2024-11-20 16:36:35.191364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.643 [2024-11-20 16:36:35.191371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.643 [2024-11-20 16:36:35.191379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:53.643 [2024-11-20 16:36:35.191406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:53.643 [2024-11-20 16:36:35.191421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1345fc0 (9): Bad file descriptor 00:24:53.643 [2024-11-20 16:36:35.241244] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:53.643 Running I/O for 1 seconds... 
00:24:53.643 11251.00 IOPS, 43.95 MiB/s 00:24:53.643 Latency(us) 00:24:53.643 [2024-11-20T15:36:39.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.643 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:53.643 Verification LBA range: start 0x0 length 0x4000 00:24:53.643 NVMe0n1 : 1.01 11301.91 44.15 0.00 0.00 11268.59 1686.19 9611.95 00:24:53.643 [2024-11-20T15:36:39.602Z] =================================================================================================================== 00:24:53.643 [2024-11-20T15:36:39.602Z] Total : 11301.91 44.15 0.00 0.00 11268.59 1686.19 9611.95 00:24:53.644 16:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:53.644 16:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:53.904 16:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.166 16:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.166 16:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:54.166 16:36:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:54.426 16:36:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2320793 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2320793 ']' 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2320793 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2320793 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2320793' 00:24:57.761 killing process with pid 2320793 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2320793 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2320793 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:57.761 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.062 rmmod nvme_tcp 00:24:58.062 rmmod nvme_fabrics 00:24:58.062 rmmod nvme_keyring 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2317264 ']' 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2317264 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2317264 ']' 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2317264 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2317264 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2317264' 00:24:58.062 killing process with pid 2317264 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2317264 00:24:58.062 16:36:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2317264 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.352 16:36:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.267 16:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.267 00:25:00.267 real 0m39.227s 00:25:00.267 user 2m0.732s 00:25:00.267 sys 
0m8.390s 00:25:00.267 16:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.267 16:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.267 ************************************ 00:25:00.267 END TEST nvmf_failover 00:25:00.267 ************************************ 00:25:00.267 16:36:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:00.267 16:36:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:00.267 16:36:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.267 16:36:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.528 ************************************ 00:25:00.528 START TEST nvmf_host_discovery 00:25:00.528 ************************************ 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:00.528 * Looking for test storage... 
00:25:00.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.528 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:00.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.529 --rc genhtml_branch_coverage=1 00:25:00.529 --rc genhtml_function_coverage=1 00:25:00.529 --rc 
genhtml_legend=1 00:25:00.529 --rc geninfo_all_blocks=1 00:25:00.529 --rc geninfo_unexecuted_blocks=1 00:25:00.529 00:25:00.529 ' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:00.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.529 --rc genhtml_branch_coverage=1 00:25:00.529 --rc genhtml_function_coverage=1 00:25:00.529 --rc genhtml_legend=1 00:25:00.529 --rc geninfo_all_blocks=1 00:25:00.529 --rc geninfo_unexecuted_blocks=1 00:25:00.529 00:25:00.529 ' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:00.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.529 --rc genhtml_branch_coverage=1 00:25:00.529 --rc genhtml_function_coverage=1 00:25:00.529 --rc genhtml_legend=1 00:25:00.529 --rc geninfo_all_blocks=1 00:25:00.529 --rc geninfo_unexecuted_blocks=1 00:25:00.529 00:25:00.529 ' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:00.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.529 --rc genhtml_branch_coverage=1 00:25:00.529 --rc genhtml_function_coverage=1 00:25:00.529 --rc genhtml_legend=1 00:25:00.529 --rc geninfo_all_blocks=1 00:25:00.529 --rc geninfo_unexecuted_blocks=1 00:25:00.529 00:25:00.529 ' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.529 16:36:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.529 16:36:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.529 16:36:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:00.529 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:00.530 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:00.530 16:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:08.674 
16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.674 16:36:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:08.674 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:08.674 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:08.674 Found net devices under 0000:31:00.0: cvl_0_0 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:08.674 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:08.675 Found net devices under 0000:31:00.1: cvl_0_1 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:08.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:25:08.675 00:25:08.675 --- 10.0.0.2 ping statistics --- 00:25:08.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.675 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:08.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:25:08.675 00:25:08.675 --- 10.0.0.1 ping statistics --- 00:25:08.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.675 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.675 
16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2327140 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2327140 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2327140 ']' 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.675 16:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.675 [2024-11-20 16:36:53.980553] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:25:08.675 [2024-11-20 16:36:53.980615] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.675 [2024-11-20 16:36:54.081691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.675 [2024-11-20 16:36:54.131861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.675 [2024-11-20 16:36:54.131912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.675 [2024-11-20 16:36:54.131937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.675 [2024-11-20 16:36:54.131944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.675 [2024-11-20 16:36:54.131950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:08.675 [2024-11-20 16:36:54.132780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.937 [2024-11-20 16:36:54.832844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.937 [2024-11-20 16:36:54.845111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:08.937 16:36:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:08.937 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.938 null0 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.938 null1 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2327394 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2327394 /tmp/host.sock 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 2327394 ']' 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:08.938 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.938 16:36:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.199 [2024-11-20 16:36:54.943051] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:25:09.199 [2024-11-20 16:36:54.943114] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2327394 ] 00:25:09.199 [2024-11-20 16:36:55.018684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.199 [2024-11-20 16:36:55.061355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:10.142 
16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:10.142 16:36:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:10.142 16:36:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.142 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.143 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.143 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.143 16:36:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.143 [2024-11-20 16:36:56.080193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.143 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.403 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:10.404 16:36:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:10.975 [2024-11-20 16:36:56.760495] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:10.975 [2024-11-20 16:36:56.760514] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:10.975 [2024-11-20 16:36:56.760527] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.975 [2024-11-20 16:36:56.847810] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:11.236 [2024-11-20 16:36:56.949829] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:11.236 [2024-11-20 16:36:56.950837] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x14af8b0:1 started. 00:25:11.236 [2024-11-20 16:36:56.952516] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:11.236 [2024-11-20 16:36:56.952534] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:11.236 [2024-11-20 16:36:57.000723] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14af8b0 was disconnected and freed. delete nvme_qpair. 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.497 16:36:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:11.497 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:11.760 
16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.760 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:12.022 [2024-11-20 16:36:57.772100] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x147e250:1 started. 00:25:12.022 [2024-11-20 16:36:57.782396] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x147e250 was disconnected and freed. delete nvme_qpair. 
00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.022 [2024-11-20 16:36:57.864914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:12.022 [2024-11-20 16:36:57.865732] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:12.022 [2024-11-20 16:36:57.865754] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:12.022 16:36:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:12.022 [2024-11-20 16:36:57.953446] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.022 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:12.284 16:36:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.284 [2024-11-20 16:36:58.016260] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:12.284 [2024-11-20 16:36:58.016298] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:12.284 [2024-11-20 16:36:58.016309] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:12.284 [2024-11-20 16:36:58.016314] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:12.284 16:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:12.284 16:36:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count 
&& ((notification_count == expected_count))' 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.227 [2024-11-20 16:36:59.128394] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:13.227 [2024-11-20 16:36:59.128416] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:13.227 [2024-11-20 16:36:59.136954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.227 [2024-11-20 16:36:59.136973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.227 [2024-11-20 16:36:59.136986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.227 [2024-11-20 16:36:59.136994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.227 [2024-11-20 16:36:59.137003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.227 [2024-11-20 16:36:59.137011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.227 [2024-11-20 16:36:59.137019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.227 [2024-11-20 16:36:59.137026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.227 [2024-11-20 16:36:59.137034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.227 16:36:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.227 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:13.227 [2024-11-20 16:36:59.146966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.228 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.228 [2024-11-20 16:36:59.157000] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.228 [2024-11-20 16:36:59.157012] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:13.228 [2024-11-20 16:36:59.157017] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.228 [2024-11-20 16:36:59.157023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.228 [2024-11-20 16:36:59.157039] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:13.228 [2024-11-20 16:36:59.157459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.228 [2024-11-20 16:36:59.157498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.228 [2024-11-20 16:36:59.157509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.228 [2024-11-20 16:36:59.157527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.228 [2024-11-20 16:36:59.157565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.228 [2024-11-20 16:36:59.157575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.228 [2024-11-20 16:36:59.157584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.228 [2024-11-20 16:36:59.157591] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:13.228 [2024-11-20 16:36:59.157597] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.228 [2024-11-20 16:36:59.157602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.228 [2024-11-20 16:36:59.167072] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.228 [2024-11-20 16:36:59.167086] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:13.228 [2024-11-20 16:36:59.167090] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.228 [2024-11-20 16:36:59.167095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.228 [2024-11-20 16:36:59.167111] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.228 [2024-11-20 16:36:59.167430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.228 [2024-11-20 16:36:59.167443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.228 [2024-11-20 16:36:59.167451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.228 [2024-11-20 16:36:59.167463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.228 [2024-11-20 16:36:59.167473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.228 [2024-11-20 16:36:59.167480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.228 [2024-11-20 16:36:59.167487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.228 [2024-11-20 16:36:59.167493] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:13.228 [2024-11-20 16:36:59.167498] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.228 [2024-11-20 16:36:59.167506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:13.228 [2024-11-20 16:36:59.177143] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.228 [2024-11-20 16:36:59.177158] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:13.228 [2024-11-20 16:36:59.177162] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.228 [2024-11-20 16:36:59.177167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.228 [2024-11-20 16:36:59.177182] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.228 [2024-11-20 16:36:59.177398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.228 [2024-11-20 16:36:59.177410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.228 [2024-11-20 16:36:59.177418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.228 [2024-11-20 16:36:59.177429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.228 [2024-11-20 16:36:59.177440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.228 [2024-11-20 16:36:59.177447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.228 [2024-11-20 16:36:59.177454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.228 [2024-11-20 16:36:59.177460] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:13.228 [2024-11-20 16:36:59.177465] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.228 [2024-11-20 16:36:59.177469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.489 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.489 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.489 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:13.489 [2024-11-20 16:36:59.187213] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.489 [2024-11-20 16:36:59.187226] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:13.489 [2024-11-20 16:36:59.187230] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.489 [2024-11-20 16:36:59.187235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.489 [2024-11-20 16:36:59.187248] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:13.489 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:13.489 [2024-11-20 16:36:59.187529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.489 [2024-11-20 16:36:59.187542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.489 [2024-11-20 16:36:59.187549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.489 [2024-11-20 16:36:59.187561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.489 [2024-11-20 16:36:59.187575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.489 [2024-11-20 16:36:59.187582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.489 [2024-11-20 16:36:59.187589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.490 [2024-11-20 16:36:59.187595] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:13.490 [2024-11-20 16:36:59.187600] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.490 [2024-11-20 16:36:59.187604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:13.490 [2024-11-20 16:36:59.197280] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.490 [2024-11-20 16:36:59.197293] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:13.490 [2024-11-20 16:36:59.197297] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.490 [2024-11-20 16:36:59.197302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.490 [2024-11-20 16:36:59.197316] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:13.490 [2024-11-20 16:36:59.197611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.490 [2024-11-20 16:36:59.197622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.490 [2024-11-20 16:36:59.197629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.490 [2024-11-20 16:36:59.197640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.490 [2024-11-20 16:36:59.198490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.490 [2024-11-20 16:36:59.198503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.490 [2024-11-20 16:36:59.198511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.490 [2024-11-20 16:36:59.198517] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:13.490 [2024-11-20 16:36:59.198522] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.490 [2024-11-20 16:36:59.198526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.490 [2024-11-20 16:36:59.207347] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.490 [2024-11-20 16:36:59.207367] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:13.490 [2024-11-20 16:36:59.207371] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.490 [2024-11-20 16:36:59.207376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.490 [2024-11-20 16:36:59.207390] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.490 [2024-11-20 16:36:59.207566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.490 [2024-11-20 16:36:59.207577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.490 [2024-11-20 16:36:59.207585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.490 [2024-11-20 16:36:59.207596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.490 [2024-11-20 16:36:59.207612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.490 [2024-11-20 16:36:59.207620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.490 [2024-11-20 16:36:59.207627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.490 [2024-11-20 16:36:59.207633] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:13.490 [2024-11-20 16:36:59.207638] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.490 [2024-11-20 16:36:59.207642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:13.490 [2024-11-20 16:36:59.217422] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.490 [2024-11-20 16:36:59.217435] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:13.490 [2024-11-20 16:36:59.217440] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.490 [2024-11-20 16:36:59.217444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.490 [2024-11-20 16:36:59.217459] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.490 [2024-11-20 16:36:59.217711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.490 [2024-11-20 16:36:59.217722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.490 [2024-11-20 16:36:59.217730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.490 [2024-11-20 16:36:59.217741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.490 [2024-11-20 16:36:59.217765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.490 [2024-11-20 16:36:59.217772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.490 [2024-11-20 16:36:59.217780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.490 [2024-11-20 16:36:59.217786] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:13.490 [2024-11-20 16:36:59.217790] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.490 [2024-11-20 16:36:59.217795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.490 [2024-11-20 16:36:59.227491] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.490 [2024-11-20 16:36:59.227502] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:13.490 [2024-11-20 16:36:59.227507] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.490 [2024-11-20 16:36:59.227511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.490 [2024-11-20 16:36:59.227525] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:13.490 [2024-11-20 16:36:59.227801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.490 [2024-11-20 16:36:59.227812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.490 [2024-11-20 16:36:59.227820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.490 [2024-11-20 16:36:59.227830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.490 [2024-11-20 16:36:59.227846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.490 [2024-11-20 16:36:59.227853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.490 [2024-11-20 16:36:59.227860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.490 [2024-11-20 16:36:59.227866] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:13.490 [2024-11-20 16:36:59.227871] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.490 [2024-11-20 16:36:59.227875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.490 [2024-11-20 16:36:59.237557] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:13.490 [2024-11-20 16:36:59.237568] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:13.490 [2024-11-20 16:36:59.237573] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.490 [2024-11-20 16:36:59.237578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.490 [2024-11-20 16:36:59.237591] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:13.490 [2024-11-20 16:36:59.237879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.490 [2024-11-20 16:36:59.237890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.490 [2024-11-20 16:36:59.237897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.490 [2024-11-20 16:36:59.237908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.490 [2024-11-20 16:36:59.237932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.490 [2024-11-20 16:36:59.237939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.490 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:13.490 [2024-11-20 16:36:59.237946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.490 [2024-11-20 16:36:59.237956] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:13.490 [2024-11-20 16:36:59.237961] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.490 [2024-11-20 16:36:59.237965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:13.491 [2024-11-20 16:36:59.247623] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] 
Delete qpairs for reset. 00:25:13.491 [2024-11-20 16:36:59.247635] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:13.491 [2024-11-20 16:36:59.247639] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:13.491 [2024-11-20 16:36:59.247644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:13.491 [2024-11-20 16:36:59.247657] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:13.491 [2024-11-20 16:36:59.247941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.491 [2024-11-20 16:36:59.247952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x147ffd0 with addr=10.0.0.2, port=4420 00:25:13.491 [2024-11-20 16:36:59.247959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffd0 is same with the state(6) to be set 00:25:13.491 [2024-11-20 16:36:59.247969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147ffd0 (9): Bad file descriptor 00:25:13.491 [2024-11-20 16:36:59.247990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:13.491 [2024-11-20 16:36:59.247998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:13.491 [2024-11-20 16:36:59.248005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:13.491 [2024-11-20 16:36:59.248011] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:13.491 [2024-11-20 16:36:59.248016] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:13.491 [2024-11-20 16:36:59.248020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:13.491 [2024-11-20 16:36:59.255134] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:13.491 [2024-11-20 16:36:59.255158] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:13.491 16:36:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:14.694 16:37:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.694 
16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:14.694 16:37:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.694 16:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.081 [2024-11-20 16:37:01.635185] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:16.081 [2024-11-20 16:37:01.635205] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:16.081 [2024-11-20 16:37:01.635218] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:16.081 [2024-11-20 16:37:01.762617] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:16.081 [2024-11-20 16:37:01.826269] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:16.081 [2024-11-20 16:37:01.827075] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x14971d0:1 started. 00:25:16.081 [2024-11-20 16:37:01.828944] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:16.081 [2024-11-20 16:37:01.828974] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.081 request: 00:25:16.081 { 00:25:16.081 "name": "nvme", 00:25:16.081 "trtype": "tcp", 00:25:16.081 "traddr": "10.0.0.2", 00:25:16.081 "adrfam": "ipv4", 00:25:16.081 "trsvcid": "8009", 00:25:16.081 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:16.081 "wait_for_attach": true, 00:25:16.081 "method": "bdev_nvme_start_discovery", 00:25:16.081 "req_id": 1 00:25:16.081 } 00:25:16.081 Got JSON-RPC error response 00:25:16.081 response: 00:25:16.081 { 00:25:16.081 "code": -17, 00:25:16.081 "message": "File exists" 00:25:16.081 } 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:16.081 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.081 [2024-11-20 16:37:01.872797] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x14971d0 was disconnected and freed. delete nvme_qpair. 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:16.082 16:37:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.082 request: 00:25:16.082 { 00:25:16.082 "name": "nvme_second", 00:25:16.082 "trtype": "tcp", 00:25:16.082 "traddr": "10.0.0.2", 00:25:16.082 "adrfam": "ipv4", 00:25:16.082 "trsvcid": "8009", 00:25:16.082 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:16.082 "wait_for_attach": true, 00:25:16.082 "method": "bdev_nvme_start_discovery", 00:25:16.082 "req_id": 1 00:25:16.082 } 00:25:16.082 Got JSON-RPC error response 00:25:16.082 response: 00:25:16.082 { 00:25:16.082 "code": -17, 00:25:16.082 "message": "File exists" 00:25:16.082 } 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:16.082 16:37:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.082 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:16.082 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:16.082 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:16.082 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.082 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:16.082 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.082 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:16.082 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:16.342 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:16.343 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.343 16:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.284 [2024-11-20 16:37:03.088450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.284 [2024-11-20 16:37:03.088480] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15dcc40 with addr=10.0.0.2, port=8010 00:25:17.284 [2024-11-20 16:37:03.088495] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:17.284 [2024-11-20 16:37:03.088502] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:17.284 [2024-11-20 16:37:03.088509] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:18.227 [2024-11-20 16:37:04.090806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.227 [2024-11-20 16:37:04.090829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15dcc40 with addr=10.0.0.2, port=8010 00:25:18.227 [2024-11-20 16:37:04.090841] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:18.227 [2024-11-20 16:37:04.090847] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:18.227 [2024-11-20 16:37:04.090854] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:19.169 [2024-11-20 16:37:05.092801] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:19.169 request: 00:25:19.169 { 00:25:19.169 "name": "nvme_second", 00:25:19.169 "trtype": "tcp", 00:25:19.169 "traddr": "10.0.0.2", 00:25:19.169 "adrfam": "ipv4", 00:25:19.169 "trsvcid": "8010", 00:25:19.169 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:19.169 "wait_for_attach": false, 00:25:19.169 "attach_timeout_ms": 3000, 00:25:19.169 "method": "bdev_nvme_start_discovery", 00:25:19.169 "req_id": 1 00:25:19.169 } 00:25:19.169 Got JSON-RPC error response 00:25:19.169 response: 00:25:19.169 { 00:25:19.169 "code": -110, 00:25:19.169 "message": "Connection timed out" 00:25:19.169 } 00:25:19.169 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:25:19.169 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:19.169 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:19.169 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:19.169 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:19.169 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:19.169 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:19.170 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:19.170 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.170 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:19.170 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.170 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:19.170 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2327394 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:19.431 16:37:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.431 rmmod nvme_tcp 00:25:19.431 rmmod nvme_fabrics 00:25:19.431 rmmod nvme_keyring 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2327140 ']' 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2327140 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2327140 ']' 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2327140 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327140 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327140' 
00:25:19.431 killing process with pid 2327140 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2327140 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2327140 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.431 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:19.691 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:19.691 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.692 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.692 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.692 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.692 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.692 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.692 16:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.604 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.604 00:25:21.604 real 0m21.220s 00:25:21.604 user 0m25.496s 00:25:21.604 sys 0m7.176s 00:25:21.604 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.604 16:37:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.604 ************************************ 00:25:21.604 END TEST nvmf_host_discovery 00:25:21.604 ************************************ 00:25:21.604 16:37:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:21.604 16:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:21.604 16:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.604 16:37:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.604 ************************************ 00:25:21.604 START TEST nvmf_host_multipath_status 00:25:21.604 ************************************ 00:25:21.604 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:21.865 * Looking for test storage... 
00:25:21.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:21.866 16:37:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.866 16:37:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:21.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.866 --rc genhtml_branch_coverage=1 00:25:21.866 --rc genhtml_function_coverage=1 00:25:21.866 --rc genhtml_legend=1 00:25:21.866 --rc geninfo_all_blocks=1 00:25:21.866 --rc geninfo_unexecuted_blocks=1 00:25:21.866 00:25:21.866 ' 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:21.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.866 --rc genhtml_branch_coverage=1 00:25:21.866 --rc genhtml_function_coverage=1 00:25:21.866 --rc genhtml_legend=1 00:25:21.866 --rc geninfo_all_blocks=1 00:25:21.866 --rc geninfo_unexecuted_blocks=1 00:25:21.866 00:25:21.866 ' 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:21.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.866 --rc genhtml_branch_coverage=1 00:25:21.866 --rc genhtml_function_coverage=1 00:25:21.866 --rc genhtml_legend=1 00:25:21.866 --rc geninfo_all_blocks=1 00:25:21.866 --rc geninfo_unexecuted_blocks=1 00:25:21.866 00:25:21.866 ' 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:21.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.866 --rc genhtml_branch_coverage=1 00:25:21.866 --rc genhtml_function_coverage=1 00:25:21.866 --rc genhtml_legend=1 00:25:21.866 --rc geninfo_all_blocks=1 00:25:21.866 --rc geninfo_unexecuted_blocks=1 00:25:21.866 00:25:21.866 ' 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:21.866 
16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.866 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:21.867 16:37:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.867 16:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:30.011 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:30.011 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:30.011 Found net devices under 0000:31:00.0: cvl_0_0 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.011 16:37:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:30.011 Found net devices under 0000:31:00.1: cvl_0_1 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.011 16:37:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.011 16:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.011 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:30.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:25:30.011 00:25:30.011 --- 10.0.0.2 ping statistics --- 00:25:30.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.012 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:25:30.012 00:25:30.012 --- 10.0.0.1 ping statistics --- 00:25:30.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.012 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2333656 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2333656 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2333656 ']' 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:30.012 [2024-11-20 16:37:15.130206] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:25:30.012 [2024-11-20 16:37:15.130273] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.012 [2024-11-20 16:37:15.214435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:30.012 [2024-11-20 16:37:15.255329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.012 [2024-11-20 16:37:15.255364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:30.012 [2024-11-20 16:37:15.255374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.012 [2024-11-20 16:37:15.255381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.012 [2024-11-20 16:37:15.255387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.012 [2024-11-20 16:37:15.256655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.012 [2024-11-20 16:37:15.256658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.012 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:30.273 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.273 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2333656 00:25:30.273 16:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:30.273 [2024-11-20 16:37:16.132305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.273 16:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:30.533 Malloc0 00:25:30.533 16:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:30.794 16:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.794 16:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.056 [2024-11-20 16:37:16.826342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.056 16:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:31.056 [2024-11-20 16:37:16.994777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.056 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2334110 00:25:31.056 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:31.056 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:31.056 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2334110 /var/tmp/bdevperf.sock 00:25:31.318 16:37:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2334110 ']' 00:25:31.318 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.318 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.318 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.318 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.318 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:31.318 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.318 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:31.318 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:31.579 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:31.841 Nvme0n1 00:25:31.841 16:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:32.413 Nvme0n1 00:25:32.413 16:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:32.413 16:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:34.325 16:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:34.325 16:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:34.586 16:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:34.846 16:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:35.787 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:35.787 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:35.787 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.787 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:36.048 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.048 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:36.048 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.048 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:36.048 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:36.048 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:36.048 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.048 16:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.308 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.308 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.308 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.308 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.569 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.570 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:36.570 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.570 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.570 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.570 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:36.570 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.570 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.830 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.830 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:36.830 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:37.090 16:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:37.350 16:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:38.291 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:38.291 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:38.291 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.291 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.291 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.291 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:38.291 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.291 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.552 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.552 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:38.552 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:25:38.552 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.813 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.813 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.813 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.813 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.074 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.074 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:39.074 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.074 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:39.074 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.074 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:39.074 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.074 16:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:39.334 16:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.334 16:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:39.334 16:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:39.595 16:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:39.595 16:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.977 16:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:41.237 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.237 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:41.237 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.237 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:41.497 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.497 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:41.498 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.498 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.498 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.498 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:41.498 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.498 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.758 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.758 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:41.758 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:42.018 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:42.018 16:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:43.405 16:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:43.405 16:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:43.405 16:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.405 16:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:43.405 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.405 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:43.405 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.405 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.405 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.405 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.405 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:25:43.405 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.665 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.665 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.665 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.665 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.925 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.925 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.925 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.925 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.925 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.925 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:43.925 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.925 16:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.185 16:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.186 16:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:44.186 16:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:44.446 16:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:44.446 16:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.830 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:46.092 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.092 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:46.092 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.092 16:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:46.352 16:37:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.352 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:46.353 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.353 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:46.353 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.353 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:46.353 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.353 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.613 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.613 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:46.613 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:46.874 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:46.874 16:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:48.278 16:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:48.278 16:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:48.278 16:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.278 16:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.278 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.278 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:48.278 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.278 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.278 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.278 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.278 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.278 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.538 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.538 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.538 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.538 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.799 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.799 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:48.799 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.799 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:48.799 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.799 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:48.799 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.799 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:49.059 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.059 16:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:49.320 16:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:49.320 16:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:49.581 16:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:49.581 16:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:50.592 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:50.592 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:50.592 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:50.592 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:50.917 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.917 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:50.918 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.918 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.918 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.918 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.918 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.918 16:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:51.178 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.179 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:51.179 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:51.179 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:51.440 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.440 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:51.440 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.440 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.701 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.701 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:51.701 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.701 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.701 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.701 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:51.701 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:51.962 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:52.223 16:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:53.165 16:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:53.165 16:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:53.165 16:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.165 16:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.429 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.430 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:53.430 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.430 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.430 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.430 16:37:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.430 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.430 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.690 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.690 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.690 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.690 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.951 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.951 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.951 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.951 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.951 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.951 
16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.951 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.951 16:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.212 16:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.212 16:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:54.212 16:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:54.474 16:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:54.734 16:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:55.675 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:55.675 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:55.675 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.675 16:37:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:55.935 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.935 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:55.935 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.935 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:55.935 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.935 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:55.935 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.935 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.196 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.196 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.196 16:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.196 16:37:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.459 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.459 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.459 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.460 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.460 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.460 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.460 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.460 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.720 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.720 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:56.720 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:56.980 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:57.240 16:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:58.182 16:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:58.182 16:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:58.182 16:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.182 16:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.443 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.443 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:58.443 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.443 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.443 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.443 16:37:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.443 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.443 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.704 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.704 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.704 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.704 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:58.964 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.964 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:58.964 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:58.964 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.964 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.964 
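The repeated `port_status` checks in this log call `bdev_nvme_get_io_paths` and filter the result with `jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="…").…'`. A minimal Python sketch of that same selection logic, using a hypothetical sample shaped like the structure the jq path implies (the sample values are illustrative, not captured RPC output):

```python
import json

# Hypothetical sample mirroring the shape implied by the jq filter:
# .poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible
sample = json.dumps({
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"}, "current": True,
             "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"}, "current": False,
             "connected": True, "accessible": False},
        ]}
    ]
})

def port_status(rpc_output: str, port: str, attr: str) -> bool:
    """Return one attribute of the io_path whose listener uses `port`."""
    data = json.loads(rpc_output)
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[attr]
    raise KeyError(f"no io_path for port {port}")

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # False
```

The shell test then compares the extracted string against the expected value with `[[ true == \t\r\u\e ]]`, which is what the xtrace lines above show.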
16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:58.964 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.964 16:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2334110 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2334110 ']' 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2334110 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2334110 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2334110' 00:25:59.224 killing process with pid 2334110 00:25:59.224 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2334110 00:25:59.224 
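The two `set_ANA_state` / `check_status` rounds above follow a simple pattern: with both listeners `non_optimized` the test expects `check_status true true true true true true`, and after switching port 4421 to `inaccessible` it expects `true false true true true false`. An illustrative mapping of ANA state to the per-path flags being asserted (an assumption drawn from the expectations in this log, not SPDK's implementation):

```python
# Illustrative expectation, inferred from the check_status calls above:
# an "inaccessible" listener keeps its TCP connection but the path is
# neither current nor accessible; other ANA states leave it accessible.
def expected_flags(ana_state: str) -> dict:
    accessible = ana_state != "inaccessible"
    return {"connected": True, "accessible": accessible}

print(expected_flags("non_optimized"))  # {'connected': True, 'accessible': True}
print(expected_flags("inaccessible"))   # {'connected': True, 'accessible': False}
```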
16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2334110 00:25:59.224 { 00:25:59.224 "results": [ 00:25:59.224 { 00:25:59.224 "job": "Nvme0n1", 00:25:59.224 "core_mask": "0x4", 00:25:59.224 "workload": "verify", 00:25:59.224 "status": "terminated", 00:25:59.224 "verify_range": { 00:25:59.224 "start": 0, 00:25:59.224 "length": 16384 00:25:59.224 }, 00:25:59.224 "queue_depth": 128, 00:25:59.224 "io_size": 4096, 00:25:59.224 "runtime": 26.797519, 00:25:59.224 "iops": 10792.67823263788, 00:25:59.224 "mibps": 42.15889934624172, 00:25:59.224 "io_failed": 0, 00:25:59.224 "io_timeout": 0, 00:25:59.224 "avg_latency_us": 11841.406855060388, 00:25:59.224 "min_latency_us": 312.32, 00:25:59.224 "max_latency_us": 3019898.88 00:25:59.224 } 00:25:59.224 ], 00:25:59.224 "core_count": 1 00:25:59.224 } 00:25:59.488 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2334110 00:25:59.488 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:59.488 [2024-11-20 16:37:17.059738] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:25:59.488 [2024-11-20 16:37:17.059797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334110 ] 00:25:59.488 [2024-11-20 16:37:17.118666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.488 [2024-11-20 16:37:17.147741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:59.488 Running I/O for 90 seconds... 
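The terminated bdevperf job reports both IOPS and throughput, and the two figures are consistent given the 4096-byte `io_size`. A quick sketch of that arithmetic, with the values copied from the results JSON above:

```python
# Values taken from the bdevperf "results" block for job Nvme0n1.
iops = 10792.67823263788
io_size = 4096        # bytes per I/O
runtime = 26.797519   # seconds

# MiB/s = IOPS * bytes-per-I/O / bytes-per-MiB
mibps = iops * io_size / (1024 * 1024)
total_ios = iops * runtime

print(f"{mibps:.14f} MiB/s")  # 42.15889934624172, as reported
print(f"~{total_ios:.0f} I/Os completed over {runtime} s")
```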
00:25:59.488 9501.00 IOPS, 37.11 MiB/s [2024-11-20T15:37:45.447Z] 9545.50 IOPS, 37.29 MiB/s [2024-11-20T15:37:45.447Z] 9592.00 IOPS, 37.47 MiB/s [2024-11-20T15:37:45.447Z] 9577.00 IOPS, 37.41 MiB/s [2024-11-20T15:37:45.447Z] 9851.80 IOPS, 38.48 MiB/s [2024-11-20T15:37:45.447Z] 10368.67 IOPS, 40.50 MiB/s [2024-11-20T15:37:45.447Z] 10726.57 IOPS, 41.90 MiB/s [2024-11-20T15:37:45.447Z] 10694.88 IOPS, 41.78 MiB/s [2024-11-20T15:37:45.447Z] 10577.89 IOPS, 41.32 MiB/s [2024-11-20T15:37:45.447Z] 10484.20 IOPS, 40.95 MiB/s [2024-11-20T15:37:45.447Z] 10395.82 IOPS, 40.61 MiB/s [2024-11-20T15:37:45.447Z] [2024-11-20 16:37:30.187262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.488 [2024-11-20 16:37:30.187301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 
cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:25:59.488 [2024-11-20 16:37:30.187675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 
[2024-11-20 16:37:30.187760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:59.488 [2024-11-20 16:37:30.187770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.488 [2024-11-20 16:37:30.187775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 
16:37:30.187848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.187932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.187943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.489 [2024-11-20 16:37:30.187947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.489 [2024-11-20 16:37:30.188233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:59.489 [2024-11-20 16:37:30.188697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.489 [2024-11-20 16:37:30.188703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.188915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.188995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:59.490 [2024-11-20 16:37:30.189571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.490 [2024-11-20 16:37:30.189577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.189904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.189909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:30.190234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:30.190240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:59.491 10234.33 IOPS, 39.98 MiB/s [2024-11-20T15:37:45.450Z] 9447.08 IOPS, 36.90 MiB/s [2024-11-20T15:37:45.450Z] 8772.29 IOPS, 34.27 MiB/s [2024-11-20T15:37:45.450Z] 8292.13 IOPS, 32.39 MiB/s [2024-11-20T15:37:45.450Z] 8578.56 IOPS, 33.51 MiB/s [2024-11-20T15:37:45.450Z] 8831.59 IOPS, 34.50 MiB/s [2024-11-20T15:37:45.450Z] 9280.67 IOPS, 36.25 MiB/s [2024-11-20T15:37:45.450Z] 9683.32 IOPS, 37.83 MiB/s [2024-11-20T15:37:45.450Z] 9946.90 IOPS, 38.86 MiB/s [2024-11-20T15:37:45.450Z] 10100.67 IOPS, 39.46 MiB/s [2024-11-20T15:37:45.450Z] 10227.73 IOPS, 39.95 MiB/s [2024-11-20T15:37:45.450Z] 10497.57 IOPS, 41.01 MiB/s [2024-11-20T15:37:45.450Z] 10764.25 IOPS, 42.05 MiB/s [2024-11-20T15:37:45.450Z] [2024-11-20 16:37:42.912266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:32 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.491 [2024-11-20 16:37:42.912303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.912333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.491 [2024-11-20 16:37:42.912339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.912350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:42.912356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.912366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.491 [2024-11-20 16:37:42.912371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.912382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.491 [2024-11-20 16:37:42.912387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.913037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.491 [2024-11-20 16:37:42.913048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.913059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.491 [2024-11-20 16:37:42.913065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.913075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:42.913080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.913096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:42.913101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.913112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:42.913117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.913128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.491 [2024-11-20 16:37:42.913133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:59.491 [2024-11-20 16:37:42.913759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 
nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.492 [2024-11-20 16:37:42.913770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:59.492 [2024-11-20 16:37:42.913781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.492 [2024-11-20 16:37:42.913786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:59.492 [2024-11-20 16:37:42.913797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.492 [2024-11-20 16:37:42.913802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.492 [2024-11-20 16:37:42.913813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.492 [2024-11-20 16:37:42.913818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:59.492 [2024-11-20 16:37:42.913828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.492 [2024-11-20 16:37:42.913833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.492 [2024-11-20 16:37:42.913843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.492 [2024-11-20 16:37:42.913849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:59.492 [2024-11-20 16:37:42.913859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.492 [2024-11-20 16:37:42.913864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:59.492 10879.72 IOPS, 42.50 MiB/s [2024-11-20T15:37:45.451Z] 10830.38 IOPS, 42.31 MiB/s [2024-11-20T15:37:45.451Z] Received shutdown signal, test time was about 26.798130 seconds
00:25:59.492
00:25:59.492 Latency(us)
00:25:59.492 [2024-11-20T15:37:45.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:59.492 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:59.492 Verification LBA range: start 0x0 length 0x4000
00:25:59.492 Nvme0n1 : 26.80 10792.68 42.16 0.00 0.00 11841.41 312.32 3019898.88
00:25:59.492 [2024-11-20T15:37:45.451Z] ===================================================================================================================
00:25:59.492 [2024-11-20T15:37:45.451Z] Total : 10792.68 42.16 0.00 0.00 11841.41 312.32 3019898.88
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:59.492 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:59.492 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:59.752 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:25:59.752 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:25:59.752 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2333656 ']'
00:25:59.752 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2333656
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2333656 ']'
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2333656
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2333656
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2333656'
killing process with pid 2333656
16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2333656
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2333656
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:59.753 16:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:02.305
00:26:02.305 real 0m40.233s
00:26:02.305 user 1m44.054s
00:26:02.305 sys 0m11.374s
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:02.305 ************************************
00:26:02.305 END TEST nvmf_host_multipath_status
00:26:02.305 ************************************
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.305 ************************************
00:26:02.305 START TEST nvmf_discovery_remove_ifc
00:26:02.305 ************************************
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:02.305 * Looking for test storage...
00:26:02.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version
00:26:02.305 16:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:26:02.305 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:26:02.305 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:02.305 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:02.305 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:02.305 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:26:02.305 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:26:02.305 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:26:02.305 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc --
scripts/common.sh@345 -- # : 1 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:26:02.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.306 --rc genhtml_branch_coverage=1 00:26:02.306 --rc genhtml_function_coverage=1 00:26:02.306 --rc genhtml_legend=1 00:26:02.306 --rc geninfo_all_blocks=1 00:26:02.306 --rc geninfo_unexecuted_blocks=1 00:26:02.306 00:26:02.306 ' 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:02.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.306 --rc genhtml_branch_coverage=1 00:26:02.306 --rc genhtml_function_coverage=1 00:26:02.306 --rc genhtml_legend=1 00:26:02.306 --rc geninfo_all_blocks=1 00:26:02.306 --rc geninfo_unexecuted_blocks=1 00:26:02.306 00:26:02.306 ' 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:02.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.306 --rc genhtml_branch_coverage=1 00:26:02.306 --rc genhtml_function_coverage=1 00:26:02.306 --rc genhtml_legend=1 00:26:02.306 --rc geninfo_all_blocks=1 00:26:02.306 --rc geninfo_unexecuted_blocks=1 00:26:02.306 00:26:02.306 ' 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:02.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.306 --rc genhtml_branch_coverage=1 00:26:02.306 --rc genhtml_function_coverage=1 00:26:02.306 --rc genhtml_legend=1 00:26:02.306 --rc geninfo_all_blocks=1 00:26:02.306 --rc geninfo_unexecuted_blocks=1 00:26:02.306 00:26:02.306 ' 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.306 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.307 
16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.307 16:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.449 16:37:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.449 16:37:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:10.449 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.449 16:37:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:10.449 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:10.449 Found net devices under 0000:31:00.0: cvl_0_0 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:10.449 Found net devices under 0000:31:00.1: cvl_0_1 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.449 16:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:10.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:26:10.449 00:26:10.449 --- 10.0.0.2 ping statistics --- 00:26:10.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.449 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:10.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:26:10.449 00:26:10.449 --- 10.0.0.1 ping statistics --- 00:26:10.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.449 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2343907 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 2343907 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2343907 ']' 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.449 16:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.449 [2024-11-20 16:37:55.320762] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:26:10.449 [2024-11-20 16:37:55.320824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.449 [2024-11-20 16:37:55.419182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.449 [2024-11-20 16:37:55.469043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.449 [2024-11-20 16:37:55.469093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:10.449 [2024-11-20 16:37:55.469102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.449 [2024-11-20 16:37:55.469110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.449 [2024-11-20 16:37:55.469116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:10.449 [2024-11-20 16:37:55.469942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.449 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.450 [2024-11-20 16:37:56.183366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.450 [2024-11-20 16:37:56.191596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:10.450 null0 00:26:10.450 [2024-11-20 16:37:56.223560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2344240 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2344240 /tmp/host.sock 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2344240 ']' 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:10.450 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.450 16:37:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:10.450 [2024-11-20 16:37:56.298996] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:26:10.450 [2024-11-20 16:37:56.299056] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2344240 ] 00:26:10.450 [2024-11-20 16:37:56.374127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.710 [2024-11-20 16:37:56.416202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.280 16:37:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.280 16:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.663 [2024-11-20 16:37:58.211967] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:12.663 [2024-11-20 16:37:58.211990] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:12.663 [2024-11-20 16:37:58.212004] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:12.663 [2024-11-20 16:37:58.341408] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:12.663 [2024-11-20 16:37:58.442159] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:12.663 [2024-11-20 16:37:58.443143] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15c3550:1 started. 
00:26:12.663 [2024-11-20 16:37:58.444703] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:12.663 [2024-11-20 16:37:58.444747] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:12.663 [2024-11-20 16:37:58.444768] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:12.663 [2024-11-20 16:37:58.444782] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:12.663 [2024-11-20 16:37:58.444802] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.663 [2024-11-20 16:37:58.451341] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15c3550 was disconnected and freed. delete nvme_qpair. 
00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:12.663 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.924 16:37:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.867 16:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:14.808 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.808 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.808 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.808 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.808 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.808 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
00:26:14.808 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.067 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.067 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.067 16:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.005 16:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.943 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.943 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.943 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.943 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.943 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.943 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.944 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.944 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.944 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.944 16:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.326 [2024-11-20 16:38:03.885507] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:18.326 [2024-11-20 16:38:03.885553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.326 [2024-11-20 16:38:03.885566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.326 [2024-11-20 16:38:03.885576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.326 [2024-11-20 16:38:03.885584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.326 [2024-11-20 16:38:03.885592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:18.326 [2024-11-20 16:38:03.885600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.326 [2024-11-20 16:38:03.885608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.326 [2024-11-20 16:38:03.885616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.326 [2024-11-20 16:38:03.885624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.326 [2024-11-20 16:38:03.885631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.326 [2024-11-20 16:38:03.885639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159fec0 is same with the state(6) to be set 00:26:18.326 [2024-11-20 16:38:03.895528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159fec0 (9): Bad file descriptor 00:26:18.326 16:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.326 [2024-11-20 16:38:03.905566] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:18.326 [2024-11-20 16:38:03.905578] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:18.326 [2024-11-20 16:38:03.905583] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:18.326 [2024-11-20 16:38:03.905589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:18.326 [2024-11-20 16:38:03.905612] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:18.326 16:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.326 16:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.326 16:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.326 16:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.326 16:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.326 16:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.267 [2024-11-20 16:38:04.957046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:19.267 [2024-11-20 16:38:04.957092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x159fec0 with addr=10.0.0.2, port=4420 00:26:19.267 [2024-11-20 16:38:04.957107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159fec0 is same with the state(6) to be set 00:26:19.267 [2024-11-20 16:38:04.957138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159fec0 (9): Bad file descriptor 00:26:19.267 [2024-11-20 16:38:04.957522] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:19.267 [2024-11-20 16:38:04.957548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:19.267 [2024-11-20 16:38:04.957556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:19.267 [2024-11-20 16:38:04.957565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:19.267 [2024-11-20 16:38:04.957572] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:19.267 [2024-11-20 16:38:04.957578] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:19.267 [2024-11-20 16:38:04.957584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:19.267 [2024-11-20 16:38:04.957592] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:19.267 [2024-11-20 16:38:04.957597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:19.267 16:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.267 16:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.267 16:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:20.209 [2024-11-20 16:38:05.959974] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:20.209 [2024-11-20 16:38:05.959997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:20.209 [2024-11-20 16:38:05.960008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:20.209 [2024-11-20 16:38:05.960016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:20.209 [2024-11-20 16:38:05.960023] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:20.209 [2024-11-20 16:38:05.960031] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:20.209 [2024-11-20 16:38:05.960036] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:20.209 [2024-11-20 16:38:05.960041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:20.209 [2024-11-20 16:38:05.960064] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:20.209 [2024-11-20 16:38:05.960087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.209 [2024-11-20 16:38:05.960097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.209 [2024-11-20 16:38:05.960107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.209 [2024-11-20 16:38:05.960120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.209 [2024-11-20 16:38:05.960128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:20.209 [2024-11-20 16:38:05.960136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.209 [2024-11-20 16:38:05.960144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.209 [2024-11-20 16:38:05.960151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.209 [2024-11-20 16:38:05.960160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.209 [2024-11-20 16:38:05.960167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.209 [2024-11-20 16:38:05.960175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:20.209 [2024-11-20 16:38:05.960500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x158f600 (9): Bad file descriptor 00:26:20.209 [2024-11-20 16:38:05.961514] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:20.209 [2024-11-20 16:38:05.961525] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:20.209 16:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.209 16:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.209 16:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.209 16:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:20.209 16:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.209 16:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.209 16:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.209 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:20.470 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:20.470 16:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:21.410 16:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.349 [2024-11-20 16:38:08.017098] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:22.349 [2024-11-20 16:38:08.017115] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:22.349 [2024-11-20 16:38:08.017129] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:22.349 [2024-11-20 16:38:08.145554] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:22.349 16:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.609 [2024-11-20 16:38:08.326669] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:22.609 [2024-11-20 16:38:08.327586] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x15aa570:1 started. 
00:26:22.609 [2024-11-20 16:38:08.328796] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:22.609 [2024-11-20 16:38:08.328833] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:22.609 [2024-11-20 16:38:08.328852] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:22.609 [2024-11-20 16:38:08.328867] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:22.609 [2024-11-20 16:38:08.328875] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:22.609 [2024-11-20 16:38:08.335452] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x15aa570 was disconnected and freed. delete nvme_qpair. 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:23.550 16:38:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2344240 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2344240 ']' 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2344240 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2344240 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2344240' 00:26:23.550 killing process with pid 2344240 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2344240 00:26:23.550 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2344240 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.810 
16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.810 rmmod nvme_tcp 00:26:23.810 rmmod nvme_fabrics 00:26:23.810 rmmod nvme_keyring 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2343907 ']' 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2343907 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2343907 ']' 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2343907 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2343907 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2343907' 00:26:23.810 
killing process with pid 2343907 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2343907 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2343907 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:23.810 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:24.070 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:24.070 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:24.070 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.070 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.070 16:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.016 16:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:26.016 00:26:26.016 real 0m23.996s 00:26:26.016 user 0m29.111s 00:26:26.016 sys 0m6.928s 00:26:26.016 16:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.016 16:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.016 ************************************ 00:26:26.016 END TEST nvmf_discovery_remove_ifc 00:26:26.016 ************************************ 00:26:26.016 16:38:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:26.016 16:38:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:26.016 16:38:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.016 16:38:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.016 ************************************ 00:26:26.016 START TEST nvmf_identify_kernel_target 00:26:26.016 ************************************ 00:26:26.016 16:38:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:26.277 * Looking for test storage... 
00:26:26.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:26.277 16:38:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:26.277 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.277 16:38:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:26.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.277 --rc genhtml_branch_coverage=1 00:26:26.278 --rc genhtml_function_coverage=1 00:26:26.278 --rc genhtml_legend=1 00:26:26.278 --rc geninfo_all_blocks=1 00:26:26.278 --rc geninfo_unexecuted_blocks=1 00:26:26.278 00:26:26.278 ' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:26.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.278 --rc genhtml_branch_coverage=1 00:26:26.278 --rc genhtml_function_coverage=1 00:26:26.278 --rc genhtml_legend=1 00:26:26.278 --rc geninfo_all_blocks=1 00:26:26.278 --rc geninfo_unexecuted_blocks=1 00:26:26.278 00:26:26.278 ' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:26.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.278 --rc genhtml_branch_coverage=1 00:26:26.278 --rc genhtml_function_coverage=1 00:26:26.278 --rc genhtml_legend=1 00:26:26.278 --rc geninfo_all_blocks=1 00:26:26.278 --rc geninfo_unexecuted_blocks=1 00:26:26.278 00:26:26.278 ' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:26.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.278 --rc genhtml_branch_coverage=1 00:26:26.278 --rc genhtml_function_coverage=1 00:26:26.278 --rc genhtml_legend=1 00:26:26.278 --rc geninfo_all_blocks=1 00:26:26.278 --rc geninfo_unexecuted_blocks=1 00:26:26.278 00:26:26.278 ' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:26.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:26.278 16:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:34.416 16:38:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:34.416 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.416 16:38:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:34.416 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.416 16:38:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:34.416 Found net devices under 0000:31:00.0: cvl_0_0 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:34.416 Found net devices under 0000:31:00.1: cvl_0_1 
00:26:34.416 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:34.417 16:38:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:34.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:34.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:26:34.417 00:26:34.417 --- 10.0.0.2 ping statistics --- 00:26:34.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.417 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:34.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:26:34.417 00:26:34.417 --- 10.0.0.1 ping statistics --- 00:26:34.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.417 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:34.417 
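[Editor's note: the `nvmf_tcp_init` trace above (nvmf/common.sh@250-291) can be condensed into the following sketch. Interface names `cvl_0_0`/`cvl_0_1`, the namespace name, the 10.0.0.0/24 addresses, and port 4420 are taken from the log; the function wrapper is illustrative only, and running it requires root plus two connected NICs.]

```shell
# Sketch of the target/initiator split performed by nvmf_tcp_init in the log:
# the target NIC is moved into its own network namespace so the kernel sees
# two independent endpoints, then reachability is verified both ways.
setup_nvmf_netns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"          # target NIC lives in the netns
    ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator stays in the root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP (port 4420) in, tagged with a comment so the matching
    # cleanup can later strip exactly this rule via iptables-save | grep -v.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    # Verify both directions before proceeding, as the log does.
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```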
16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:34.417 16:38:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:36.960 Waiting for block devices as requested 00:26:36.960 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:36.960 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:36.960 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:36.960 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:36.960 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:37.220 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:37.220 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:37.220 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:37.480 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:37.480 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:37.740 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:37.740 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:37.740 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:37.740 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:38.000 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:26:38.000 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:38.000 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:38.260 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:38.520 No valid GPT data, bailing 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:38.520 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:26:38.520 00:26:38.520 Discovery Log Number of Records 2, Generation counter 2 00:26:38.520 =====Discovery Log Entry 0====== 00:26:38.520 trtype: tcp 00:26:38.520 adrfam: ipv4 00:26:38.520 subtype: current discovery subsystem 
00:26:38.520 treq: not specified, sq flow control disable supported 00:26:38.520 portid: 1 00:26:38.520 trsvcid: 4420 00:26:38.520 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:38.520 traddr: 10.0.0.1 00:26:38.520 eflags: none 00:26:38.520 sectype: none 00:26:38.520 =====Discovery Log Entry 1====== 00:26:38.520 trtype: tcp 00:26:38.520 adrfam: ipv4 00:26:38.520 subtype: nvme subsystem 00:26:38.520 treq: not specified, sq flow control disable supported 00:26:38.520 portid: 1 00:26:38.521 trsvcid: 4420 00:26:38.521 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:38.521 traddr: 10.0.0.1 00:26:38.521 eflags: none 00:26:38.521 sectype: none 00:26:38.521 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:38.521 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:38.782 ===================================================== 00:26:38.782 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:38.782 ===================================================== 00:26:38.782 Controller Capabilities/Features 00:26:38.782 ================================ 00:26:38.782 Vendor ID: 0000 00:26:38.782 Subsystem Vendor ID: 0000 00:26:38.782 Serial Number: 90e695fb4c26bbb4da9e 00:26:38.782 Model Number: Linux 00:26:38.782 Firmware Version: 6.8.9-20 00:26:38.782 Recommended Arb Burst: 0 00:26:38.782 IEEE OUI Identifier: 00 00 00 00:26:38.782 Multi-path I/O 00:26:38.782 May have multiple subsystem ports: No 00:26:38.782 May have multiple controllers: No 00:26:38.782 Associated with SR-IOV VF: No 00:26:38.782 Max Data Transfer Size: Unlimited 00:26:38.782 Max Number of Namespaces: 0 00:26:38.782 Max Number of I/O Queues: 1024 00:26:38.782 NVMe Specification Version (VS): 1.3 00:26:38.782 NVMe Specification Version (Identify): 1.3 00:26:38.782 Maximum Queue Entries: 1024 
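[Editor's note: the `configure_kernel_target` trace above (nvmf/common.sh@660-705) elides the redirection targets of its `echo` commands. The sketch below reconstructs the sequence against the standard kernel `nvmet` configfs layout; the attribute file names (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are inferred from that layout, not from the log, and are therefore an assumption. NQN, backing device, address, and port mirror the log. Root is required.]

```shell
# Sketch of the kernel NVMe-oF/TCP target bring-up logged above: create the
# subsystem, attach the block device as namespace 1, open a TCP port, and
# link the subsystem to the port so discovery can see it.
configure_kernel_target() {
    local nqn=nqn.2016-06.io.spdk:testnqn nvme=/dev/nvme0n1 ip=10.0.0.1
    local nvmet=/sys/kernel/config/nvmet
    modprobe nvmet nvmet_tcp
    mkdir "$nvmet/subsystems/$nqn"
    mkdir "$nvmet/subsystems/$nqn/namespaces/1"
    mkdir "$nvmet/ports/1"
    # Attribute names below are the standard nvmet configfs files (assumed,
    # since the log only shows the echoed values, not their destinations).
    echo "SPDK-$nqn" > "$nvmet/subsystems/$nqn/attr_model"
    echo 1           > "$nvmet/subsystems/$nqn/attr_allow_any_host"
    echo "$nvme"     > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
    echo 1           > "$nvmet/subsystems/$nqn/namespaces/1/enable"
    echo "$ip"       > "$nvmet/ports/1/addr_traddr"
    echo tcp         > "$nvmet/ports/1/addr_trtype"
    echo 4420        > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4        > "$nvmet/ports/1/addr_adrfam"
    # Exporting the subsystem on the port is what makes it appear as
    # Discovery Log Entry 1 in the `nvme discover` output above.
    ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"
}
```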
00:26:38.782 Contiguous Queues Required: No 00:26:38.782 Arbitration Mechanisms Supported 00:26:38.782 Weighted Round Robin: Not Supported 00:26:38.782 Vendor Specific: Not Supported 00:26:38.782 Reset Timeout: 7500 ms 00:26:38.782 Doorbell Stride: 4 bytes 00:26:38.782 NVM Subsystem Reset: Not Supported 00:26:38.782 Command Sets Supported 00:26:38.782 NVM Command Set: Supported 00:26:38.782 Boot Partition: Not Supported 00:26:38.782 Memory Page Size Minimum: 4096 bytes 00:26:38.782 Memory Page Size Maximum: 4096 bytes 00:26:38.782 Persistent Memory Region: Not Supported 00:26:38.782 Optional Asynchronous Events Supported 00:26:38.782 Namespace Attribute Notices: Not Supported 00:26:38.782 Firmware Activation Notices: Not Supported 00:26:38.782 ANA Change Notices: Not Supported 00:26:38.782 PLE Aggregate Log Change Notices: Not Supported 00:26:38.782 LBA Status Info Alert Notices: Not Supported 00:26:38.782 EGE Aggregate Log Change Notices: Not Supported 00:26:38.782 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.782 Zone Descriptor Change Notices: Not Supported 00:26:38.782 Discovery Log Change Notices: Supported 00:26:38.782 Controller Attributes 00:26:38.782 128-bit Host Identifier: Not Supported 00:26:38.782 Non-Operational Permissive Mode: Not Supported 00:26:38.782 NVM Sets: Not Supported 00:26:38.782 Read Recovery Levels: Not Supported 00:26:38.782 Endurance Groups: Not Supported 00:26:38.782 Predictable Latency Mode: Not Supported 00:26:38.782 Traffic Based Keep ALive: Not Supported 00:26:38.782 Namespace Granularity: Not Supported 00:26:38.782 SQ Associations: Not Supported 00:26:38.782 UUID List: Not Supported 00:26:38.782 Multi-Domain Subsystem: Not Supported 00:26:38.782 Fixed Capacity Management: Not Supported 00:26:38.782 Variable Capacity Management: Not Supported 00:26:38.782 Delete Endurance Group: Not Supported 00:26:38.782 Delete NVM Set: Not Supported 00:26:38.782 Extended LBA Formats Supported: Not Supported 00:26:38.782 Flexible 
Data Placement Supported: Not Supported 00:26:38.782 00:26:38.782 Controller Memory Buffer Support 00:26:38.782 ================================ 00:26:38.782 Supported: No 00:26:38.782 00:26:38.782 Persistent Memory Region Support 00:26:38.782 ================================ 00:26:38.782 Supported: No 00:26:38.782 00:26:38.782 Admin Command Set Attributes 00:26:38.783 ============================ 00:26:38.783 Security Send/Receive: Not Supported 00:26:38.783 Format NVM: Not Supported 00:26:38.783 Firmware Activate/Download: Not Supported 00:26:38.783 Namespace Management: Not Supported 00:26:38.783 Device Self-Test: Not Supported 00:26:38.783 Directives: Not Supported 00:26:38.783 NVMe-MI: Not Supported 00:26:38.783 Virtualization Management: Not Supported 00:26:38.783 Doorbell Buffer Config: Not Supported 00:26:38.783 Get LBA Status Capability: Not Supported 00:26:38.783 Command & Feature Lockdown Capability: Not Supported 00:26:38.783 Abort Command Limit: 1 00:26:38.783 Async Event Request Limit: 1 00:26:38.783 Number of Firmware Slots: N/A 00:26:38.783 Firmware Slot 1 Read-Only: N/A 00:26:38.783 Firmware Activation Without Reset: N/A 00:26:38.783 Multiple Update Detection Support: N/A 00:26:38.783 Firmware Update Granularity: No Information Provided 00:26:38.783 Per-Namespace SMART Log: No 00:26:38.783 Asymmetric Namespace Access Log Page: Not Supported 00:26:38.783 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:38.783 Command Effects Log Page: Not Supported 00:26:38.783 Get Log Page Extended Data: Supported 00:26:38.783 Telemetry Log Pages: Not Supported 00:26:38.783 Persistent Event Log Pages: Not Supported 00:26:38.783 Supported Log Pages Log Page: May Support 00:26:38.783 Commands Supported & Effects Log Page: Not Supported 00:26:38.783 Feature Identifiers & Effects Log Page:May Support 00:26:38.783 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.783 Data Area 4 for Telemetry Log: Not Supported 00:26:38.783 Error Log Page Entries 
Supported: 1 00:26:38.783 Keep Alive: Not Supported 00:26:38.783 00:26:38.783 NVM Command Set Attributes 00:26:38.783 ========================== 00:26:38.783 Submission Queue Entry Size 00:26:38.783 Max: 1 00:26:38.783 Min: 1 00:26:38.783 Completion Queue Entry Size 00:26:38.783 Max: 1 00:26:38.783 Min: 1 00:26:38.783 Number of Namespaces: 0 00:26:38.783 Compare Command: Not Supported 00:26:38.783 Write Uncorrectable Command: Not Supported 00:26:38.783 Dataset Management Command: Not Supported 00:26:38.783 Write Zeroes Command: Not Supported 00:26:38.783 Set Features Save Field: Not Supported 00:26:38.783 Reservations: Not Supported 00:26:38.783 Timestamp: Not Supported 00:26:38.783 Copy: Not Supported 00:26:38.783 Volatile Write Cache: Not Present 00:26:38.783 Atomic Write Unit (Normal): 1 00:26:38.783 Atomic Write Unit (PFail): 1 00:26:38.783 Atomic Compare & Write Unit: 1 00:26:38.783 Fused Compare & Write: Not Supported 00:26:38.783 Scatter-Gather List 00:26:38.783 SGL Command Set: Supported 00:26:38.783 SGL Keyed: Not Supported 00:26:38.783 SGL Bit Bucket Descriptor: Not Supported 00:26:38.783 SGL Metadata Pointer: Not Supported 00:26:38.783 Oversized SGL: Not Supported 00:26:38.783 SGL Metadata Address: Not Supported 00:26:38.783 SGL Offset: Supported 00:26:38.783 Transport SGL Data Block: Not Supported 00:26:38.783 Replay Protected Memory Block: Not Supported 00:26:38.783 00:26:38.783 Firmware Slot Information 00:26:38.783 ========================= 00:26:38.783 Active slot: 0 00:26:38.783 00:26:38.783 00:26:38.783 Error Log 00:26:38.783 ========= 00:26:38.783 00:26:38.783 Active Namespaces 00:26:38.783 ================= 00:26:38.783 Discovery Log Page 00:26:38.783 ================== 00:26:38.783 Generation Counter: 2 00:26:38.783 Number of Records: 2 00:26:38.783 Record Format: 0 00:26:38.783 00:26:38.783 Discovery Log Entry 0 00:26:38.783 ---------------------- 00:26:38.783 Transport Type: 3 (TCP) 00:26:38.783 Address Family: 1 (IPv4) 00:26:38.783 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:38.783 Entry Flags: 00:26:38.783 Duplicate Returned Information: 0 00:26:38.783 Explicit Persistent Connection Support for Discovery: 0 00:26:38.783 Transport Requirements: 00:26:38.783 Secure Channel: Not Specified 00:26:38.783 Port ID: 1 (0x0001) 00:26:38.783 Controller ID: 65535 (0xffff) 00:26:38.783 Admin Max SQ Size: 32 00:26:38.783 Transport Service Identifier: 4420 00:26:38.783 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:38.783 Transport Address: 10.0.0.1 00:26:38.783 Discovery Log Entry 1 00:26:38.783 ---------------------- 00:26:38.783 Transport Type: 3 (TCP) 00:26:38.783 Address Family: 1 (IPv4) 00:26:38.783 Subsystem Type: 2 (NVM Subsystem) 00:26:38.783 Entry Flags: 00:26:38.783 Duplicate Returned Information: 0 00:26:38.783 Explicit Persistent Connection Support for Discovery: 0 00:26:38.783 Transport Requirements: 00:26:38.783 Secure Channel: Not Specified 00:26:38.783 Port ID: 1 (0x0001) 00:26:38.783 Controller ID: 65535 (0xffff) 00:26:38.783 Admin Max SQ Size: 32 00:26:38.783 Transport Service Identifier: 4420 00:26:38.783 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:38.783 Transport Address: 10.0.0.1 00:26:38.783 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:38.783 get_feature(0x01) failed 00:26:38.783 get_feature(0x02) failed 00:26:38.783 get_feature(0x04) failed 00:26:38.783 ===================================================== 00:26:38.783 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:38.783 ===================================================== 00:26:38.783 Controller Capabilities/Features 00:26:38.783 ================================ 00:26:38.783 Vendor ID: 0000 00:26:38.783 Subsystem Vendor ID: 
0000 00:26:38.783 Serial Number: 10f8b6e466fade2511a7 00:26:38.783 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:38.783 Firmware Version: 6.8.9-20 00:26:38.783 Recommended Arb Burst: 6 00:26:38.783 IEEE OUI Identifier: 00 00 00 00:26:38.783 Multi-path I/O 00:26:38.783 May have multiple subsystem ports: Yes 00:26:38.783 May have multiple controllers: Yes 00:26:38.783 Associated with SR-IOV VF: No 00:26:38.783 Max Data Transfer Size: Unlimited 00:26:38.783 Max Number of Namespaces: 1024 00:26:38.783 Max Number of I/O Queues: 128 00:26:38.783 NVMe Specification Version (VS): 1.3 00:26:38.783 NVMe Specification Version (Identify): 1.3 00:26:38.783 Maximum Queue Entries: 1024 00:26:38.783 Contiguous Queues Required: No 00:26:38.783 Arbitration Mechanisms Supported 00:26:38.783 Weighted Round Robin: Not Supported 00:26:38.783 Vendor Specific: Not Supported 00:26:38.783 Reset Timeout: 7500 ms 00:26:38.783 Doorbell Stride: 4 bytes 00:26:38.783 NVM Subsystem Reset: Not Supported 00:26:38.783 Command Sets Supported 00:26:38.783 NVM Command Set: Supported 00:26:38.783 Boot Partition: Not Supported 00:26:38.783 Memory Page Size Minimum: 4096 bytes 00:26:38.783 Memory Page Size Maximum: 4096 bytes 00:26:38.783 Persistent Memory Region: Not Supported 00:26:38.783 Optional Asynchronous Events Supported 00:26:38.783 Namespace Attribute Notices: Supported 00:26:38.783 Firmware Activation Notices: Not Supported 00:26:38.783 ANA Change Notices: Supported 00:26:38.783 PLE Aggregate Log Change Notices: Not Supported 00:26:38.783 LBA Status Info Alert Notices: Not Supported 00:26:38.783 EGE Aggregate Log Change Notices: Not Supported 00:26:38.783 Normal NVM Subsystem Shutdown event: Not Supported 00:26:38.783 Zone Descriptor Change Notices: Not Supported 00:26:38.783 Discovery Log Change Notices: Not Supported 00:26:38.783 Controller Attributes 00:26:38.783 128-bit Host Identifier: Supported 00:26:38.783 Non-Operational Permissive Mode: Not Supported 00:26:38.783 NVM Sets: Not 
Supported 00:26:38.783 Read Recovery Levels: Not Supported 00:26:38.783 Endurance Groups: Not Supported 00:26:38.783 Predictable Latency Mode: Not Supported 00:26:38.783 Traffic Based Keep ALive: Supported 00:26:38.783 Namespace Granularity: Not Supported 00:26:38.783 SQ Associations: Not Supported 00:26:38.783 UUID List: Not Supported 00:26:38.783 Multi-Domain Subsystem: Not Supported 00:26:38.783 Fixed Capacity Management: Not Supported 00:26:38.783 Variable Capacity Management: Not Supported 00:26:38.783 Delete Endurance Group: Not Supported 00:26:38.783 Delete NVM Set: Not Supported 00:26:38.783 Extended LBA Formats Supported: Not Supported 00:26:38.783 Flexible Data Placement Supported: Not Supported 00:26:38.783 00:26:38.783 Controller Memory Buffer Support 00:26:38.783 ================================ 00:26:38.783 Supported: No 00:26:38.783 00:26:38.783 Persistent Memory Region Support 00:26:38.783 ================================ 00:26:38.783 Supported: No 00:26:38.783 00:26:38.783 Admin Command Set Attributes 00:26:38.783 ============================ 00:26:38.783 Security Send/Receive: Not Supported 00:26:38.783 Format NVM: Not Supported 00:26:38.783 Firmware Activate/Download: Not Supported 00:26:38.784 Namespace Management: Not Supported 00:26:38.784 Device Self-Test: Not Supported 00:26:38.784 Directives: Not Supported 00:26:38.784 NVMe-MI: Not Supported 00:26:38.784 Virtualization Management: Not Supported 00:26:38.784 Doorbell Buffer Config: Not Supported 00:26:38.784 Get LBA Status Capability: Not Supported 00:26:38.784 Command & Feature Lockdown Capability: Not Supported 00:26:38.784 Abort Command Limit: 4 00:26:38.784 Async Event Request Limit: 4 00:26:38.784 Number of Firmware Slots: N/A 00:26:38.784 Firmware Slot 1 Read-Only: N/A 00:26:38.784 Firmware Activation Without Reset: N/A 00:26:38.784 Multiple Update Detection Support: N/A 00:26:38.784 Firmware Update Granularity: No Information Provided 00:26:38.784 Per-Namespace SMART Log: Yes 
00:26:38.784 Asymmetric Namespace Access Log Page: Supported 00:26:38.784 ANA Transition Time : 10 sec 00:26:38.784 00:26:38.784 Asymmetric Namespace Access Capabilities 00:26:38.784 ANA Optimized State : Supported 00:26:38.784 ANA Non-Optimized State : Supported 00:26:38.784 ANA Inaccessible State : Supported 00:26:38.784 ANA Persistent Loss State : Supported 00:26:38.784 ANA Change State : Supported 00:26:38.784 ANAGRPID is not changed : No 00:26:38.784 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:38.784 00:26:38.784 ANA Group Identifier Maximum : 128 00:26:38.784 Number of ANA Group Identifiers : 128 00:26:38.784 Max Number of Allowed Namespaces : 1024 00:26:38.784 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:38.784 Command Effects Log Page: Supported 00:26:38.784 Get Log Page Extended Data: Supported 00:26:38.784 Telemetry Log Pages: Not Supported 00:26:38.784 Persistent Event Log Pages: Not Supported 00:26:38.784 Supported Log Pages Log Page: May Support 00:26:38.784 Commands Supported & Effects Log Page: Not Supported 00:26:38.784 Feature Identifiers & Effects Log Page:May Support 00:26:38.784 NVMe-MI Commands & Effects Log Page: May Support 00:26:38.784 Data Area 4 for Telemetry Log: Not Supported 00:26:38.784 Error Log Page Entries Supported: 128 00:26:38.784 Keep Alive: Supported 00:26:38.784 Keep Alive Granularity: 1000 ms 00:26:38.784 00:26:38.784 NVM Command Set Attributes 00:26:38.784 ========================== 00:26:38.784 Submission Queue Entry Size 00:26:38.784 Max: 64 00:26:38.784 Min: 64 00:26:38.784 Completion Queue Entry Size 00:26:38.784 Max: 16 00:26:38.784 Min: 16 00:26:38.784 Number of Namespaces: 1024 00:26:38.784 Compare Command: Not Supported 00:26:38.784 Write Uncorrectable Command: Not Supported 00:26:38.784 Dataset Management Command: Supported 00:26:38.784 Write Zeroes Command: Supported 00:26:38.784 Set Features Save Field: Not Supported 00:26:38.784 Reservations: Not Supported 00:26:38.784 Timestamp: Not Supported 
00:26:38.784 Copy: Not Supported 00:26:38.784 Volatile Write Cache: Present 00:26:38.784 Atomic Write Unit (Normal): 1 00:26:38.784 Atomic Write Unit (PFail): 1 00:26:38.784 Atomic Compare & Write Unit: 1 00:26:38.784 Fused Compare & Write: Not Supported 00:26:38.784 Scatter-Gather List 00:26:38.784 SGL Command Set: Supported 00:26:38.784 SGL Keyed: Not Supported 00:26:38.784 SGL Bit Bucket Descriptor: Not Supported 00:26:38.784 SGL Metadata Pointer: Not Supported 00:26:38.784 Oversized SGL: Not Supported 00:26:38.784 SGL Metadata Address: Not Supported 00:26:38.784 SGL Offset: Supported 00:26:38.784 Transport SGL Data Block: Not Supported 00:26:38.784 Replay Protected Memory Block: Not Supported 00:26:38.784 00:26:38.784 Firmware Slot Information 00:26:38.784 ========================= 00:26:38.784 Active slot: 0 00:26:38.784 00:26:38.784 Asymmetric Namespace Access 00:26:38.784 =========================== 00:26:38.784 Change Count : 0 00:26:38.784 Number of ANA Group Descriptors : 1 00:26:38.784 ANA Group Descriptor : 0 00:26:38.784 ANA Group ID : 1 00:26:38.784 Number of NSID Values : 1 00:26:38.784 Change Count : 0 00:26:38.784 ANA State : 1 00:26:38.784 Namespace Identifier : 1 00:26:38.784 00:26:38.784 Commands Supported and Effects 00:26:38.784 ============================== 00:26:38.784 Admin Commands 00:26:38.784 -------------- 00:26:38.784 Get Log Page (02h): Supported 00:26:38.784 Identify (06h): Supported 00:26:38.784 Abort (08h): Supported 00:26:38.784 Set Features (09h): Supported 00:26:38.784 Get Features (0Ah): Supported 00:26:38.784 Asynchronous Event Request (0Ch): Supported 00:26:38.784 Keep Alive (18h): Supported 00:26:38.784 I/O Commands 00:26:38.784 ------------ 00:26:38.784 Flush (00h): Supported 00:26:38.784 Write (01h): Supported LBA-Change 00:26:38.784 Read (02h): Supported 00:26:38.784 Write Zeroes (08h): Supported LBA-Change 00:26:38.784 Dataset Management (09h): Supported 00:26:38.784 00:26:38.784 Error Log 00:26:38.784 ========= 
00:26:38.784 Entry: 0 00:26:38.784 Error Count: 0x3 00:26:38.784 Submission Queue Id: 0x0 00:26:38.784 Command Id: 0x5 00:26:38.784 Phase Bit: 0 00:26:38.784 Status Code: 0x2 00:26:38.784 Status Code Type: 0x0 00:26:38.784 Do Not Retry: 1 00:26:38.784 Error Location: 0x28 00:26:38.784 LBA: 0x0 00:26:38.784 Namespace: 0x0 00:26:38.784 Vendor Log Page: 0x0 00:26:38.784 ----------- 00:26:38.784 Entry: 1 00:26:38.784 Error Count: 0x2 00:26:38.784 Submission Queue Id: 0x0 00:26:38.784 Command Id: 0x5 00:26:38.784 Phase Bit: 0 00:26:38.784 Status Code: 0x2 00:26:38.784 Status Code Type: 0x0 00:26:38.784 Do Not Retry: 1 00:26:38.784 Error Location: 0x28 00:26:38.784 LBA: 0x0 00:26:38.784 Namespace: 0x0 00:26:38.784 Vendor Log Page: 0x0 00:26:38.784 ----------- 00:26:38.784 Entry: 2 00:26:38.784 Error Count: 0x1 00:26:38.784 Submission Queue Id: 0x0 00:26:38.784 Command Id: 0x4 00:26:38.784 Phase Bit: 0 00:26:38.784 Status Code: 0x2 00:26:38.784 Status Code Type: 0x0 00:26:38.784 Do Not Retry: 1 00:26:38.784 Error Location: 0x28 00:26:38.784 LBA: 0x0 00:26:38.784 Namespace: 0x0 00:26:38.784 Vendor Log Page: 0x0 00:26:38.784 00:26:38.784 Number of Queues 00:26:38.784 ================ 00:26:38.784 Number of I/O Submission Queues: 128 00:26:38.784 Number of I/O Completion Queues: 128 00:26:38.784 00:26:38.784 ZNS Specific Controller Data 00:26:38.784 ============================ 00:26:38.784 Zone Append Size Limit: 0 00:26:38.784 00:26:38.784 00:26:38.784 Active Namespaces 00:26:38.784 ================= 00:26:38.784 get_feature(0x05) failed 00:26:38.784 Namespace ID:1 00:26:38.784 Command Set Identifier: NVM (00h) 00:26:38.784 Deallocate: Supported 00:26:38.784 Deallocated/Unwritten Error: Not Supported 00:26:38.784 Deallocated Read Value: Unknown 00:26:38.784 Deallocate in Write Zeroes: Not Supported 00:26:38.784 Deallocated Guard Field: 0xFFFF 00:26:38.784 Flush: Supported 00:26:38.784 Reservation: Not Supported 00:26:38.784 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:38.784 Size (in LBAs): 3750748848 (1788GiB) 00:26:38.784 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:38.784 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:38.784 UUID: 665641a9-d043-41c6-b774-7b274514238b 00:26:38.784 Thin Provisioning: Not Supported 00:26:38.784 Per-NS Atomic Units: Yes 00:26:38.784 Atomic Write Unit (Normal): 8 00:26:38.784 Atomic Write Unit (PFail): 8 00:26:38.784 Preferred Write Granularity: 8 00:26:38.784 Atomic Compare & Write Unit: 8 00:26:38.784 Atomic Boundary Size (Normal): 0 00:26:38.784 Atomic Boundary Size (PFail): 0 00:26:38.784 Atomic Boundary Offset: 0 00:26:38.784 NGUID/EUI64 Never Reused: No 00:26:38.784 ANA group ID: 1 00:26:38.784 Namespace Write Protected: No 00:26:38.784 Number of LBA Formats: 1 00:26:38.784 Current LBA Format: LBA Format #00 00:26:38.784 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:38.784 00:26:38.784 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:38.784 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.784 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:38.784 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.784 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:38.784 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.784 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:38.784 rmmod nvme_tcp 00:26:38.784 rmmod nvme_fabrics 00:26:38.784 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:38.785 16:38:24 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.785 16:38:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:26:41.323 16:38:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:26:44.708 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:26:44.708 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:26:45.019 
00:26:45.019 real 0m18.906s
00:26:45.019 user 0m5.008s
00:26:45.019 sys 0m10.916s
00:26:45.019 16:38:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:45.019 16:38:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:26:45.019 ************************************
00:26:45.019 END TEST nvmf_identify_kernel_target
00:26:45.019 ************************************
00:26:45.019 16:38:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:26:45.019 16:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:45.019 16:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:45.019 16:38:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.019 ************************************
00:26:45.019 START TEST nvmf_auth_host
00:26:45.019 ************************************
00:26:45.019 16:38:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:26:45.282 * Looking for test storage...
00:26:45.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.282 --rc genhtml_branch_coverage=1 00:26:45.282 --rc genhtml_function_coverage=1 00:26:45.282 --rc genhtml_legend=1 00:26:45.282 --rc geninfo_all_blocks=1 00:26:45.282 --rc geninfo_unexecuted_blocks=1 00:26:45.282 00:26:45.282 ' 00:26:45.282 16:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.282 --rc genhtml_branch_coverage=1 00:26:45.282 --rc genhtml_function_coverage=1 00:26:45.282 --rc genhtml_legend=1 00:26:45.282 --rc geninfo_all_blocks=1 00:26:45.282 --rc geninfo_unexecuted_blocks=1 00:26:45.282 00:26:45.282 ' 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.282 --rc genhtml_branch_coverage=1 00:26:45.282 --rc genhtml_function_coverage=1 00:26:45.282 --rc genhtml_legend=1 00:26:45.282 --rc geninfo_all_blocks=1 00:26:45.282 --rc geninfo_unexecuted_blocks=1 00:26:45.282 00:26:45.282 ' 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:45.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.282 --rc genhtml_branch_coverage=1 00:26:45.282 --rc genhtml_function_coverage=1 00:26:45.282 --rc genhtml_legend=1 00:26:45.282 --rc geninfo_all_blocks=1 00:26:45.282 --rc geninfo_unexecuted_blocks=1 00:26:45.282 00:26:45.282 ' 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.282 16:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.282 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:45.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.283 16:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.283 16:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.428 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:53.429 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:53.429 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:53.429 Found net devices under 0000:31:00.0: cvl_0_0 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:53.429 Found net devices under 0000:31:00.1: cvl_0_1 00:26:53.429 16:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:53.429 16:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:53.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:26:53.429 00:26:53.429 --- 10.0.0.2 ping statistics --- 00:26:53.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.429 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:53.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:26:53.429 00:26:53.429 --- 10.0.0.1 ping statistics --- 00:26:53.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.429 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2358695 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2358695 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2358695 ']' 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.429 16:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.689 16:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4a21bc203bedca7eba48cb0f3aaad3ba 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HIh 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4a21bc203bedca7eba48cb0f3aaad3ba 0 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4a21bc203bedca7eba48cb0f3aaad3ba 0 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4a21bc203bedca7eba48cb0f3aaad3ba 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HIh 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HIh 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.HIh 
00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9fd2148360bfdaa74fc84f4aff65061b5eda72fe2e68b8fc4b2a10d42b579c13 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PTS 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9fd2148360bfdaa74fc84f4aff65061b5eda72fe2e68b8fc4b2a10d42b579c13 3 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9fd2148360bfdaa74fc84f4aff65061b5eda72fe2e68b8fc4b2a10d42b579c13 3 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9fd2148360bfdaa74fc84f4aff65061b5eda72fe2e68b8fc4b2a10d42b579c13 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:53.689 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PTS 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PTS 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.PTS 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=48b0fddfb67dd6dff6aef83b1311c594d3c34df8386e687d 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Hx9 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 48b0fddfb67dd6dff6aef83b1311c594d3c34df8386e687d 0 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 48b0fddfb67dd6dff6aef83b1311c594d3c34df8386e687d 0 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=48b0fddfb67dd6dff6aef83b1311c594d3c34df8386e687d 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Hx9 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Hx9 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Hx9 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=25da98788cbb9f25e62efd230748df91af1d4bbf942b20ca 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xcr 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 25da98788cbb9f25e62efd230748df91af1d4bbf942b20ca 2 00:26:53.950 16:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 25da98788cbb9f25e62efd230748df91af1d4bbf942b20ca 2 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=25da98788cbb9f25e62efd230748df91af1d4bbf942b20ca 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xcr 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xcr 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.xcr 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7e80bf3247c66801d604c3872588b3a4 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.WED 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7e80bf3247c66801d604c3872588b3a4 1 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7e80bf3247c66801d604c3872588b3a4 1 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7e80bf3247c66801d604c3872588b3a4 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.WED 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.WED 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.WED 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:53.950 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f8ae49738d9711b008f3b01b17dabb1d 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tMN 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f8ae49738d9711b008f3b01b17dabb1d 1 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f8ae49738d9711b008f3b01b17dabb1d 1 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f8ae49738d9711b008f3b01b17dabb1d 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tMN 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tMN 00:26:53.951 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.tMN 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.211 16:38:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5f013f9ebb90f1039a1d85e255b4fe6cb40a5f08ca571dfa 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.r2D 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5f013f9ebb90f1039a1d85e255b4fe6cb40a5f08ca571dfa 2 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5f013f9ebb90f1039a1d85e255b4fe6cb40a5f08ca571dfa 2 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5f013f9ebb90f1039a1d85e255b4fe6cb40a5f08ca571dfa 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.r2D 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.r2D 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.r2D 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=333345947eb7733e253e32f57a6f38e9 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jXm 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 333345947eb7733e253e32f57a6f38e9 0 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 333345947eb7733e253e32f57a6f38e9 0 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=333345947eb7733e253e32f57a6f38e9 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:54.211 16:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jXm 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jXm 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.jXm 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6debd6c30c1ea33ca253c47ebd2ba340ef31f3b4052d89c830b438bb68319d91 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0iy 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6debd6c30c1ea33ca253c47ebd2ba340ef31f3b4052d89c830b438bb68319d91 3 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6debd6c30c1ea33ca253c47ebd2ba340ef31f3b4052d89c830b438bb68319d91 3 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6debd6c30c1ea33ca253c47ebd2ba340ef31f3b4052d89c830b438bb68319d91 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:54.211 16:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0iy 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0iy 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.0iy 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2358695 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2358695 ']' 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.211 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
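Each `gen_dhchap_key <digest> <len>` call above draws `len/2` random bytes with `xxd` and then formats them through an inline Python snippet into an NVMe DH-HMAC-CHAP secret string. A hedged sketch of that formatting, assuming (as nvme-cli's key generator does) that the base64 payload is the raw key followed by its little-endian CRC32 trailer; the digest byte follows the `digests` map in the log (null=0, sha256=1, sha384=2, sha512=3):

```shell
#!/usr/bin/env bash
# Produce "DHHC-1:<digest>:<base64(key || crc32(key))>:" like gen_dhchap_key above.
gen_dhchap_key() {
  local digest_idx=$1 len=$2 key
  # len hex characters of entropy; fall back to od if xxd is unavailable
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom 2>/dev/null ||
        od -An -vtx1 -N$((len / 2)) /dev/urandom | tr -d ' \n')
  python3 - "$key" "$digest_idx" <<'PY'
import sys, base64, struct, zlib
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', zlib.crc32(raw) & 0xffffffff)  # little-endian CRC32 trailer
print('DHHC-1:%02x:%s:' % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
PY
}
gen_dhchap_key 0 32   # e.g. DHHC-1:00:...: (null digest, 32-hex-char key)
```

The resulting secret is what gets written to the `/tmp/spdk.key-*` files and later registered with `keyring_file_add_key` below.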
00:26:54.212 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.212 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HIh 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.PTS ]] 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PTS 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Hx9 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:54.471 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.xcr ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xcr 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.WED 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.tMN ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tMN 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.r2D 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jXm ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jXm 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0iy 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.472 16:38:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.472 16:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:57.763 Waiting for block devices as requested 00:26:58.024 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:58.024 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:58.024 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:58.284 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:58.284 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:58.284 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:58.284 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:58.544 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:58.544 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:58.803 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:58.803 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:58.803 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:59.063 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:59.063 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:59.063 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:59.063 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:59.323 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:00.265 16:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:00.265 No valid GPT data, bailing 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:27:00.265 00:27:00.265 Discovery Log Number of Records 2, Generation counter 2 00:27:00.265 =====Discovery Log Entry 0====== 00:27:00.265 trtype: tcp 00:27:00.265 adrfam: ipv4 00:27:00.265 subtype: current discovery subsystem 00:27:00.265 treq: not specified, sq flow control disable supported 00:27:00.265 portid: 1 00:27:00.265 trsvcid: 4420 00:27:00.265 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:00.265 traddr: 10.0.0.1 00:27:00.265 eflags: none 00:27:00.265 sectype: none 00:27:00.265 =====Discovery Log Entry 1====== 00:27:00.265 trtype: tcp 00:27:00.265 adrfam: ipv4 00:27:00.265 subtype: nvme subsystem 00:27:00.265 treq: not specified, sq flow control disable supported 00:27:00.265 portid: 1 00:27:00.265 trsvcid: 4420 00:27:00.265 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:00.265 traddr: 10.0.0.1 00:27:00.265 eflags: none 00:27:00.265 sectype: none 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.265 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.526 nvme0n1 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
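The trace above shows `configure_kernel_target` building a kernel nvmet subsystem, namespace, and TCP port through configfs, and `nvmet_auth_set_key` writing the DH-HMAC-CHAP digest, FFDHE group, and DHHC-1 key into the allowed-host entry. A minimal sketch of that sequence follows; on a real target `NVMET` would be `/sys/kernel/config/nvmet` with the `nvmet`/`nvmet-tcp` modules loaded, but it defaults to a scratch directory here so the steps can be dry-run. The attribute names (`attr_model`, `attr_allow_any_host`, `dhchap_*`) are assumptions based on the kernel nvmet configfs layout, not taken from the trace itself, and the key value is deliberately elided.

```shell
#!/usr/bin/env bash
set -euo pipefail

# NVMET: /sys/kernel/config/nvmet on a real target; a scratch dir for a dry run.
NVMET=${NVMET:-$(mktemp -d)}
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0

subsys=$NVMET/subsystems/$subnqn
port=$NVMET/ports/1
host=$NVMET/hosts/$hostnqn
mkdir -p "$subsys/namespaces/1" "$subsys/allowed_hosts" "$port/subsystems" "$host"

# Subsystem: model string, no anonymous hosts, namespace 1 backed by the
# block device the trace selected (/dev/nvme0n1).
echo "SPDK-$subnqn" > "$subsys/attr_model"
echo 0              > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1   > "$subsys/namespaces/1/device_path"
echo 1              > "$subsys/namespaces/1/enable"

# Port 1: TCP listener on 10.0.0.1:4420, IPv4 (the values echoed in the trace).
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# DH-HMAC-CHAP parameters for the allowed host (what nvmet_auth_set_key writes):
# digest, FFDHE group, and the host's DHHC-1 secret (elided; real keys are in the trace).
echo 'hmac(sha256)'  > "$host/dhchap_hash"
echo ffdhe2048       > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$host/dhchap_key"

# Expose the subsystem on the port and whitelist the host, as the trace does with ln -s.
ln -sfn "$subsys" "$port/subsystems/$subnqn"
ln -sfn "$host"   "$subsys/allowed_hosts/$hostnqn"
echo "nvmet tree staged under $NVMET"
```

After this, an `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` against a live target reports two log entries, exactly as in the discovery output above: the discovery subsystem itself and `nqn.2024-02.io.spdk:cnode0`.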
00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.526 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.786 nvme0n1 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.786 16:38:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.786 
16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.786 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.046 nvme0n1 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.046 16:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:01.306 nvme0n1 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.306 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.307 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.567 nvme0n1 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.567 16:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.567 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.568 nvme0n1 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.568 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.828 
16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:01.828 
16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.828 16:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.828 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.089 nvme0n1 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.089 16:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:02.089 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.090 16:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.090 16:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.352 nvme0n1 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.352 16:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.352 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.613 nvme0n1 00:27:02.613 16:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:02.613 16:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.613 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.873 nvme0n1 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:02.873 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.874 16:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:02.874 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.136 nvme0n1
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY:
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=:
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY:
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=:
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.136 16:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.397 nvme0n1
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==:
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==:
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==:
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==:
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.397 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.657 nvme0n1
00:27:03.657 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.657 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:03.657 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:03.657 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.657 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK:
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9:
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK:
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]]
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9:
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:03.918 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.179 nvme0n1
00:27:04.179 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:04.179 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:04.179 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.179 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.179 16:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==:
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG:
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==:
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]]
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG:
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.179 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.442 nvme0n1
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=:
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=:
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.442 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.705 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.965 nvme0n1
00:27:04.965 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.965 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY:
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=:
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY:
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]]
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=:
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:04.966 16:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.538 nvme0n1
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==:
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==:
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==:
00:27:05.538 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]]
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==:
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.539 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.110 nvme0n1
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 --
# keyid=2 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.110 16:38:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.110 16:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.681 nvme0n1 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.681 16:38:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:06.681 16:38:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.681 16:38:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.681 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.942 nvme0n1 00:27:06.942 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.942 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.942 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.942 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.942 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.942 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.202 16:38:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.202 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.202 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.202 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.202 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.202 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.202 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.203 16:38:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.203 16:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.774 nvme0n1 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:07.774 16:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.774 16:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.346 nvme0n1 00:27:08.346 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.346 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.346 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.346 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.346 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.346 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.607 16:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.607 16:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.607 16:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.607 16:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.178 nvme0n1 00:27:09.178 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.178 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.178 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.178 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.178 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.178 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.440 16:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.440 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.021 nvme0n1 00:27:10.021 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.021 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.021 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.021 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.021 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.021 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.281 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.281 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.281 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.281 16:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.281 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 nvme0n1 00:27:10.851 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.851 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.851 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.851 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.851 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.851 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.111 
16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.111 16:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.682 nvme0n1 00:27:11.682 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.682 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.682 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.682 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.682 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.682 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.942 nvme0n1 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.942 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.943 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.943 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.203 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.203 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.203 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.204 
16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.204 16:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.204 nvme0n1 
00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.204 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:12.465 16:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.465 
16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.465 nvme0n1 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.465 16:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.465 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.466 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.726 nvme0n1 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.726 16:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.726 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.727 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.987 nvme0n1 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.987 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.988 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.988 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.988 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.988 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.988 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.988 16:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.248 nvme0n1 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.248 
16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:13.248 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.249 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.508 nvme0n1 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 
00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.508 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.508 16:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.509 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.769 nvme0n1 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.769 16:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.769 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.030 nvme0n1 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.030 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.290 16:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.290 nvme0n1 00:27:14.290 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.290 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.290 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.290 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.290 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.290 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.291 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.291 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.291 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.291 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.551 16:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.551 16:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.551 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.552 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.552 16:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.812 nvme0n1 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.812 
16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.812 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.073 nvme0n1 00:27:15.073 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.073 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.074 16:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.074 16:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.074 16:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.074 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.074 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.074 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.335 nvme0n1 00:27:15.335 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.335 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.335 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.335 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.335 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.335 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.597 16:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.597 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.858 nvme0n1 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.858 16:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.858 16:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.858 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.859 
16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.859 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.119 nvme0n1 00:27:16.119 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.119 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.119 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.119 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.119 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.119 16:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.119 16:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.119 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.690 nvme0n1 
00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:16.690 16:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.690 
16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.690 16:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.263 nvme0n1 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.263 16:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.263 16:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.263 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.835 nvme0n1 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:17.835 16:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.835 16:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.835 16:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.407 nvme0n1 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.407 16:39:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.407 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.408 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.408 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.408 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.408 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.408 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.408 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.408 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:18.408 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.979 nvme0n1 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.979 16:39:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.979 16:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.922 nvme0n1 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.922 16:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.494 nvme0n1 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.494 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.755 16:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.326 nvme0n1 00:27:21.326 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.326 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.326 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.326 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.326 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.326 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.326 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.326 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.587 16:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.157 nvme0n1 00:27:22.157 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.157 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.157 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.157 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.157 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.157 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.419 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.991 nvme0n1 00:27:22.991 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.991 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.991 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.991 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.991 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.991 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.252 16:39:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.252 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 16:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.253 nvme0n1 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.253 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.513 nvme0n1 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.513 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.774 nvme0n1 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.774 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.775 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.036 nvme0n1 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.036 16:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:24.297 nvme0n1 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:24.297 16:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.297 16:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.297 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.558 nvme0n1 00:27:24.558 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.558 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.558 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:24.559 16:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.559 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.820 nvme0n1 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.820 
16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:24.820 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.821 16:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.821 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.081 nvme0n1 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.081 16:39:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.081 16:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.081 16:39:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.081 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.342 nvme0n1 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:25.342 16:39:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.342 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.603 nvme0n1 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.603 
16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.603 
16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.603 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.863 nvme0n1 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.124 16:39:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.124 16:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.385 nvme0n1 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.385 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.386 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.647 nvme0n1 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:26.647 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.907 16:39:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.907 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.169 nvme0n1 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.169 16:39:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.169 16:39:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.430 nvme0n1 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.430 
16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.430 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.431 16:39:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.431 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.093 nvme0n1 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.093 16:39:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.093 16:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.698 nvme0n1 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:28.698 
16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.698 16:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.698 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.960 nvme0n1 00:27:28.960 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.960 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.960 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.960 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.960 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.960 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.220 16:39:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.220 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.221 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.221 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.221 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.221 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.221 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.221 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.221 16:39:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 nvme0n1 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.793 16:39:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.793 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.054 nvme0n1 00:27:30.054 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.054 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.054 16:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.054 
16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.054 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.316 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGEyMWJjMjAzYmVkY2E3ZWJhNDhjYjBmM2FhYWQzYmGrESbY: 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: ]] 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWZkMjE0ODM2MGJmZGFhNzRmYzg0ZjRhZmY2NTA2MWI1ZWRhNzJmZTJlNjhiOGZjNGIyYTEwZDQyYjU3OWMxM95kvXk=: 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.317 16:39:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.317 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.888 nvme0n1 00:27:30.888 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.888 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.888 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.888 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.888 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.149 16:39:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:31.149 16:39:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.149 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.150 16:39:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.150 16:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.722 nvme0n1 00:27:31.722 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.722 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.722 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.722 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.722 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.982 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.982 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.982 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.982 16:39:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.982 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.982 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.982 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.982 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:31.982 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:31.983 16:39:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.983 16:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.923 nvme0n1 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWYwMTNmOWViYjkwZjEwMzlhMWQ4NWUyNTViNGZlNmNiNDBhNWYwOGNhNTcxZGZhnkzJKw==: 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzMzMzQ1OTQ3ZWI3NzMzZTI1M2UzMmY1N2E2ZjM4ZTkovwQG: 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.923 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.924 16:39:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.924 16:39:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.494 nvme0n1 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmRlYmQ2YzMwYzFlYTMzY2EyNTNjNDdlYmQyYmEzNDBlZjMxZjNiNDA1MmQ4OWM4MzBiNDM4YmI2ODMxOWQ5MZCylCg=: 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.494 
16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.494 16:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.439 nvme0n1 00:27:34.439 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.439 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.439 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.439 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.439 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:34.439 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.439 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.439 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.440 request: 00:27:34.440 { 00:27:34.440 "name": "nvme0", 00:27:34.440 "trtype": "tcp", 00:27:34.440 "traddr": "10.0.0.1", 00:27:34.440 "adrfam": "ipv4", 00:27:34.440 "trsvcid": "4420", 00:27:34.440 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:34.440 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:34.440 "prchk_reftag": false, 00:27:34.440 "prchk_guard": false, 00:27:34.440 "hdgst": false, 00:27:34.440 "ddgst": false, 00:27:34.440 "allow_unrecognized_csi": false, 00:27:34.440 "method": "bdev_nvme_attach_controller", 00:27:34.440 "req_id": 1 00:27:34.440 } 00:27:34.440 Got JSON-RPC error 
response 00:27:34.440 response: 00:27:34.440 { 00:27:34.440 "code": -5, 00:27:34.440 "message": "Input/output error" 00:27:34.440 } 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.440 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.702 request: 
00:27:34.702 { 00:27:34.702 "name": "nvme0", 00:27:34.702 "trtype": "tcp", 00:27:34.702 "traddr": "10.0.0.1", 00:27:34.702 "adrfam": "ipv4", 00:27:34.702 "trsvcid": "4420", 00:27:34.702 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:34.702 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:34.702 "prchk_reftag": false, 00:27:34.702 "prchk_guard": false, 00:27:34.702 "hdgst": false, 00:27:34.702 "ddgst": false, 00:27:34.702 "dhchap_key": "key2", 00:27:34.702 "allow_unrecognized_csi": false, 00:27:34.702 "method": "bdev_nvme_attach_controller", 00:27:34.702 "req_id": 1 00:27:34.702 } 00:27:34.702 Got JSON-RPC error response 00:27:34.702 response: 00:27:34.702 { 00:27:34.702 "code": -5, 00:27:34.702 "message": "Input/output error" 00:27:34.702 } 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.702 16:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.702 request: 00:27:34.702 { 00:27:34.702 "name": "nvme0", 00:27:34.702 "trtype": "tcp", 00:27:34.702 "traddr": "10.0.0.1", 00:27:34.702 "adrfam": "ipv4", 00:27:34.702 "trsvcid": "4420", 00:27:34.702 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:34.702 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:34.702 "prchk_reftag": false, 00:27:34.702 "prchk_guard": false, 00:27:34.702 "hdgst": false, 00:27:34.702 "ddgst": false, 00:27:34.702 "dhchap_key": "key1", 00:27:34.702 "dhchap_ctrlr_key": "ckey2", 00:27:34.702 "allow_unrecognized_csi": false, 00:27:34.702 "method": "bdev_nvme_attach_controller", 00:27:34.702 "req_id": 1 00:27:34.702 } 00:27:34.702 Got JSON-RPC error response 00:27:34.702 response: 00:27:34.702 { 00:27:34.702 "code": -5, 00:27:34.702 "message": "Input/output error" 00:27:34.702 } 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.702 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:34.703 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.703 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.964 nvme0n1 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:34.964 16:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:34.964 
16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.964 request: 00:27:34.964 { 00:27:34.964 "name": "nvme0", 00:27:34.964 "dhchap_key": "key1", 00:27:34.964 "dhchap_ctrlr_key": "ckey2", 00:27:34.964 "method": "bdev_nvme_set_keys", 00:27:34.964 "req_id": 1 00:27:34.964 } 00:27:34.964 Got JSON-RPC error response 00:27:34.964 response: 
00:27:34.964 { 00:27:34.964 "code": -13, 00:27:34.964 "message": "Permission denied" 00:27:34.964 } 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.964 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.226 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.226 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:35.226 16:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:36.169 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.169 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:36.169 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.169 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.169 16:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.169 16:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:36.169 16:39:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDhiMGZkZGZiNjdkZDZkZmY2YWVmODNiMTMxMWM1OTRkM2MzNGRmODM4NmU2ODdkGTSLIQ==: 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: ]] 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjVkYTk4Nzg4Y2JiOWYyNWU2MmVmZDIzMDc0OGRmOTFhZjFkNGJiZjk0MmIyMGNh1TtAiw==: 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.111 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.371 nvme0n1 00:27:37.371 16:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.371 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:37.371 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.371 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.371 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.371 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:37.371 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2U4MGJmMzI0N2M2NjgwMWQ2MDRjMzg3MjU4OGIzYTQk37dK: 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: ]] 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjhhZTQ5NzM4ZDk3MTFiMDA4ZjNiMDFiMTdkYWJiMWRGFOI9: 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:37.372 16:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.372 request: 00:27:37.372 { 00:27:37.372 "name": "nvme0", 00:27:37.372 "dhchap_key": "key2", 00:27:37.372 "dhchap_ctrlr_key": "ckey1", 00:27:37.372 "method": "bdev_nvme_set_keys", 00:27:37.372 "req_id": 1 00:27:37.372 } 00:27:37.372 Got JSON-RPC error response 00:27:37.372 response: 00:27:37.372 { 00:27:37.372 "code": -13, 00:27:37.372 "message": "Permission denied" 00:27:37.372 } 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:37.372 16:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.372 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.632 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:37.632 16:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:38.573 rmmod nvme_tcp 
00:27:38.573 rmmod nvme_fabrics 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2358695 ']' 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2358695 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2358695 ']' 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2358695 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2358695 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2358695' 00:27:38.573 killing process with pid 2358695 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2358695 00:27:38.573 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2358695 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.833 16:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:40.745 16:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:40.745 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:41.005 16:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:44.305 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:44.305 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:44.876 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HIh /tmp/spdk.key-null.Hx9 /tmp/spdk.key-sha256.WED /tmp/spdk.key-sha384.r2D 
/tmp/spdk.key-sha512.0iy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:44.876 16:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:48.174 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:48.174 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:48.174 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:48.435 00:27:48.435 real 1m3.411s 00:27:48.435 user 0m57.185s 00:27:48.435 sys 0m15.787s 00:27:48.435 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.435 16:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.435 ************************************ 00:27:48.435 END TEST nvmf_auth_host 00:27:48.435 ************************************ 00:27:48.435 16:39:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:27:48.435 16:39:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:48.435 16:39:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:48.435 16:39:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.435 16:39:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.697 ************************************ 00:27:48.697 START TEST nvmf_digest 00:27:48.697 ************************************ 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:48.697 * Looking for test storage... 00:27:48.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.697 16:39:34 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:48.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.697 --rc genhtml_branch_coverage=1 00:27:48.697 --rc genhtml_function_coverage=1 00:27:48.697 --rc genhtml_legend=1 00:27:48.697 --rc geninfo_all_blocks=1 00:27:48.697 --rc geninfo_unexecuted_blocks=1 00:27:48.697 00:27:48.697 ' 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:48.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.697 --rc genhtml_branch_coverage=1 00:27:48.697 --rc genhtml_function_coverage=1 00:27:48.697 --rc genhtml_legend=1 00:27:48.697 --rc geninfo_all_blocks=1 00:27:48.697 --rc geninfo_unexecuted_blocks=1 00:27:48.697 00:27:48.697 ' 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:48.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.697 --rc genhtml_branch_coverage=1 00:27:48.697 --rc genhtml_function_coverage=1 00:27:48.697 --rc genhtml_legend=1 00:27:48.697 --rc geninfo_all_blocks=1 00:27:48.697 --rc geninfo_unexecuted_blocks=1 00:27:48.697 00:27:48.697 ' 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:48.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.697 --rc genhtml_branch_coverage=1 00:27:48.697 --rc genhtml_function_coverage=1 00:27:48.697 --rc genhtml_legend=1 00:27:48.697 --rc geninfo_all_blocks=1 00:27:48.697 --rc geninfo_unexecuted_blocks=1 00:27:48.697 00:27:48.697 ' 00:27:48.697 16:39:34 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.697 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.698 
16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:48.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:48.698 16:39:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:48.698 16:39:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.845 16:39:41 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:56.845 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:56.845 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.845 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:56.846 Found net devices under 0000:31:00.0: cvl_0_0 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:56.846 Found net devices under 0000:31:00.1: cvl_0_1 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:56.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:27:56.846 00:27:56.846 --- 10.0.0.2 ping statistics --- 00:27:56.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.846 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:27:56.846 00:27:56.846 --- 10.0.0.1 ping statistics --- 00:27:56.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.846 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.846 ************************************ 00:27:56.846 START TEST nvmf_digest_clean 00:27:56.846 ************************************ 00:27:56.846 
16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2376787 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2376787 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2376787 ']' 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.846 16:39:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.846 16:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.846 [2024-11-20 16:39:41.746293] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:27:56.846 [2024-11-20 16:39:41.746356] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.846 [2024-11-20 16:39:41.830057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.846 [2024-11-20 16:39:41.870259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.846 [2024-11-20 16:39:41.870294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.846 [2024-11-20 16:39:41.870302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.846 [2024-11-20 16:39:41.870309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.846 [2024-11-20 16:39:41.870315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:56.846 [2024-11-20 16:39:41.870896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.846 null0 00:27:56.846 [2024-11-20 16:39:42.653084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.846 [2024-11-20 16:39:42.677275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.846 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2376960 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2376960 /var/tmp/bperf.sock 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2376960 ']' 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:56.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.847 16:39:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.847 [2024-11-20 16:39:42.725708] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:27:56.847 [2024-11-20 16:39:42.725756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2376960 ] 00:27:57.108 [2024-11-20 16:39:42.814225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.108 [2024-11-20 16:39:42.850791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.680 16:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:57.680 16:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:57.680 16:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:57.680 16:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:57.680 16:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:57.940 16:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.940 16:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:58.201 nvme0n1 00:27:58.462 16:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:58.462 16:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:58.462 Running I/O for 2 seconds... 00:28:00.348 19662.00 IOPS, 76.80 MiB/s [2024-11-20T15:39:46.307Z] 19723.50 IOPS, 77.04 MiB/s 00:28:00.348 Latency(us) 00:28:00.348 [2024-11-20T15:39:46.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.348 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:00.348 nvme0n1 : 2.00 19755.95 77.17 0.00 0.00 6472.94 2362.03 15073.28 00:28:00.348 [2024-11-20T15:39:46.307Z] =================================================================================================================== 00:28:00.348 [2024-11-20T15:39:46.307Z] Total : 19755.95 77.17 0.00 0.00 6472.94 2362.03 15073.28 00:28:00.348 { 00:28:00.348 "results": [ 00:28:00.348 { 00:28:00.348 "job": "nvme0n1", 00:28:00.348 "core_mask": "0x2", 00:28:00.348 "workload": "randread", 00:28:00.348 "status": "finished", 00:28:00.348 "queue_depth": 128, 00:28:00.348 "io_size": 4096, 00:28:00.348 "runtime": 2.003194, 00:28:00.348 "iops": 19755.949748252042, 00:28:00.348 "mibps": 77.17167870410954, 00:28:00.348 "io_failed": 0, 00:28:00.348 "io_timeout": 0, 00:28:00.348 "avg_latency_us": 6472.938383659718, 00:28:00.348 "min_latency_us": 2362.0266666666666, 00:28:00.348 "max_latency_us": 15073.28 00:28:00.348 } 00:28:00.348 ], 00:28:00.348 "core_count": 1 00:28:00.348 } 00:28:00.348 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:00.348 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:28:00.348 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:00.348 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:00.348 | select(.opcode=="crc32c") 00:28:00.348 | "\(.module_name) \(.executed)"' 00:28:00.348 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2376960 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2376960 ']' 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2376960 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2376960 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2376960' 00:28:00.608 killing process with pid 2376960 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2376960 00:28:00.608 Received shutdown signal, test time was about 2.000000 seconds 00:28:00.608 00:28:00.608 Latency(us) 00:28:00.608 [2024-11-20T15:39:46.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.608 [2024-11-20T15:39:46.567Z] =================================================================================================================== 00:28:00.608 [2024-11-20T15:39:46.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:00.608 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2376960 00:28:00.868 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:00.868 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:00.868 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:00.868 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:00.868 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:00.868 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:00.868 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2377818 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2377818 
/var/tmp/bperf.sock 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2377818 ']' 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:00.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.869 16:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:00.869 [2024-11-20 16:39:46.669729] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:00.869 [2024-11-20 16:39:46.669784] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377818 ] 00:28:00.869 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:00.869 Zero copy mechanism will not be used. 
00:28:00.869 [2024-11-20 16:39:46.751888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.869 [2024-11-20 16:39:46.781118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.810 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.810 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:01.810 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:01.810 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:01.810 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:01.810 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.810 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.070 nvme0n1 00:28:02.070 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:02.070 16:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.331 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.331 Zero copy mechanism will not be used. 00:28:02.331 Running I/O for 2 seconds... 
00:28:04.213 3759.00 IOPS, 469.88 MiB/s [2024-11-20T15:39:50.172Z] 3281.00 IOPS, 410.12 MiB/s 00:28:04.213 Latency(us) 00:28:04.213 [2024-11-20T15:39:50.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.213 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:04.213 nvme0n1 : 2.00 3284.10 410.51 0.00 0.00 4869.22 962.56 7864.32 00:28:04.213 [2024-11-20T15:39:50.172Z] =================================================================================================================== 00:28:04.213 [2024-11-20T15:39:50.172Z] Total : 3284.10 410.51 0.00 0.00 4869.22 962.56 7864.32 00:28:04.213 { 00:28:04.213 "results": [ 00:28:04.213 { 00:28:04.213 "job": "nvme0n1", 00:28:04.213 "core_mask": "0x2", 00:28:04.213 "workload": "randread", 00:28:04.213 "status": "finished", 00:28:04.213 "queue_depth": 16, 00:28:04.213 "io_size": 131072, 00:28:04.213 "runtime": 2.002985, 00:28:04.213 "iops": 3284.0984830141015, 00:28:04.213 "mibps": 410.5123103767627, 00:28:04.213 "io_failed": 0, 00:28:04.213 "io_timeout": 0, 00:28:04.213 "avg_latency_us": 4869.218072362421, 00:28:04.213 "min_latency_us": 962.56, 00:28:04.213 "max_latency_us": 7864.32 00:28:04.213 } 00:28:04.213 ], 00:28:04.213 "core_count": 1 00:28:04.213 } 00:28:04.213 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:04.213 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:04.213 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:04.213 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:04.213 | select(.opcode=="crc32c") 00:28:04.213 | "\(.module_name) \(.executed)"' 00:28:04.213 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2377818 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2377818 ']' 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2377818 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2377818 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2377818' 00:28:04.472 killing process with pid 2377818 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2377818 00:28:04.472 Received shutdown signal, test time was about 2.000000 seconds 
00:28:04.472 00:28:04.472 Latency(us) 00:28:04.472 [2024-11-20T15:39:50.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.472 [2024-11-20T15:39:50.431Z] =================================================================================================================== 00:28:04.472 [2024-11-20T15:39:50.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2377818 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2378505 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2378505 /var/tmp/bperf.sock 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2378505 ']' 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:04.472 16:39:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:04.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.472 16:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:04.730 [2024-11-20 16:39:50.469064] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:04.731 [2024-11-20 16:39:50.469118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2378505 ] 00:28:04.731 [2024-11-20 16:39:50.553400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.731 [2024-11-20 16:39:50.582179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.299 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.299 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:05.299 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:05.299 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:05.299 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:05.559 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:05.559 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.128 nvme0n1 00:28:06.128 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:06.128 16:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:06.128 Running I/O for 2 seconds... 
00:28:08.452 21522.00 IOPS, 84.07 MiB/s [2024-11-20T15:39:54.411Z] 21613.00 IOPS, 84.43 MiB/s 00:28:08.452 Latency(us) 00:28:08.452 [2024-11-20T15:39:54.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.452 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:08.452 nvme0n1 : 2.01 21634.10 84.51 0.00 0.00 5908.54 2102.61 15182.51 00:28:08.452 [2024-11-20T15:39:54.411Z] =================================================================================================================== 00:28:08.452 [2024-11-20T15:39:54.411Z] Total : 21634.10 84.51 0.00 0.00 5908.54 2102.61 15182.51 00:28:08.452 { 00:28:08.452 "results": [ 00:28:08.452 { 00:28:08.452 "job": "nvme0n1", 00:28:08.452 "core_mask": "0x2", 00:28:08.452 "workload": "randwrite", 00:28:08.452 "status": "finished", 00:28:08.452 "queue_depth": 128, 00:28:08.452 "io_size": 4096, 00:28:08.452 "runtime": 2.006138, 00:28:08.452 "iops": 21634.104931963804, 00:28:08.452 "mibps": 84.50822239048361, 00:28:08.452 "io_failed": 0, 00:28:08.452 "io_timeout": 0, 00:28:08.452 "avg_latency_us": 5908.539535340968, 00:28:08.452 "min_latency_us": 2102.6133333333332, 00:28:08.452 "max_latency_us": 15182.506666666666 00:28:08.452 } 00:28:08.452 ], 00:28:08.452 "core_count": 1 00:28:08.452 } 00:28:08.452 16:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:08.452 | select(.opcode=="crc32c") 00:28:08.452 | "\(.module_name) \(.executed)"' 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2378505 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2378505 ']' 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2378505 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2378505 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2378505' 00:28:08.452 killing process with pid 2378505 00:28:08.452 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2378505 00:28:08.452 Received shutdown signal, test time was about 2.000000 seconds 
00:28:08.452 00:28:08.453 Latency(us) 00:28:08.453 [2024-11-20T15:39:54.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.453 [2024-11-20T15:39:54.412Z] =================================================================================================================== 00:28:08.453 [2024-11-20T15:39:54.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2378505 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2379191 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2379191 /var/tmp/bperf.sock 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2379191 ']' 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:08.453 16:39:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.453 16:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:08.453 [2024-11-20 16:39:54.405215] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:08.453 [2024-11-20 16:39:54.405285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379191 ] 00:28:08.453 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:08.453 Zero copy mechanism will not be used. 
00:28:08.714 [2024-11-20 16:39:54.491172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.714 [2024-11-20 16:39:54.520666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.285 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.285 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:09.285 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:09.285 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:09.285 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:09.546 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.546 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.806 nvme0n1 00:28:09.806 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:09.806 16:39:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:10.067 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:10.067 Zero copy mechanism will not be used. 00:28:10.067 Running I/O for 2 seconds... 
00:28:11.952 5639.00 IOPS, 704.88 MiB/s [2024-11-20T15:39:57.911Z] 5273.00 IOPS, 659.12 MiB/s 00:28:11.952 Latency(us) 00:28:11.952 [2024-11-20T15:39:57.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.952 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:11.952 nvme0n1 : 2.01 5266.45 658.31 0.00 0.00 3031.94 1167.36 6553.60 00:28:11.952 [2024-11-20T15:39:57.911Z] =================================================================================================================== 00:28:11.952 [2024-11-20T15:39:57.911Z] Total : 5266.45 658.31 0.00 0.00 3031.94 1167.36 6553.60 00:28:11.952 { 00:28:11.952 "results": [ 00:28:11.952 { 00:28:11.952 "job": "nvme0n1", 00:28:11.952 "core_mask": "0x2", 00:28:11.952 "workload": "randwrite", 00:28:11.952 "status": "finished", 00:28:11.952 "queue_depth": 16, 00:28:11.952 "io_size": 131072, 00:28:11.952 "runtime": 2.006285, 00:28:11.952 "iops": 5266.450180308381, 00:28:11.952 "mibps": 658.3062725385477, 00:28:11.952 "io_failed": 0, 00:28:11.952 "io_timeout": 0, 00:28:11.952 "avg_latency_us": 3031.940659978548, 00:28:11.952 "min_latency_us": 1167.36, 00:28:11.952 "max_latency_us": 6553.6 00:28:11.952 } 00:28:11.952 ], 00:28:11.952 "core_count": 1 00:28:11.952 } 00:28:11.952 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:11.952 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:11.952 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:11.952 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:11.952 | select(.opcode=="crc32c") 00:28:11.952 | "\(.module_name) \(.executed)"' 00:28:11.952 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2379191 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2379191 ']' 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2379191 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.212 16:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2379191 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2379191' 00:28:12.212 killing process with pid 2379191 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2379191 00:28:12.212 Received shutdown signal, test time was about 2.000000 seconds 
00:28:12.212 00:28:12.212 Latency(us) 00:28:12.212 [2024-11-20T15:39:58.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:12.212 [2024-11-20T15:39:58.171Z] =================================================================================================================== 00:28:12.212 [2024-11-20T15:39:58.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2379191 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2376787 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2376787 ']' 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2376787 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:12.212 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2376787 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2376787' 00:28:12.473 killing process with pid 2376787 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2376787 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2376787 00:28:12.473 00:28:12.473 
real 0m16.650s 00:28:12.473 user 0m32.967s 00:28:12.473 sys 0m3.538s 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:12.473 ************************************ 00:28:12.473 END TEST nvmf_digest_clean 00:28:12.473 ************************************ 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.473 ************************************ 00:28:12.473 START TEST nvmf_digest_error 00:28:12.473 ************************************ 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2379926 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2379926 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2379926 ']' 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.473 16:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.733 [2024-11-20 16:39:58.468940] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:12.733 [2024-11-20 16:39:58.469001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.733 [2024-11-20 16:39:58.551332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.733 [2024-11-20 16:39:58.588944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.733 [2024-11-20 16:39:58.588986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:12.733 [2024-11-20 16:39:58.588995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.733 [2024-11-20 16:39:58.589002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.733 [2024-11-20 16:39:58.589008] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.733 [2024-11-20 16:39:58.589603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.672 [2024-11-20 16:39:59.315677] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.672 16:39:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.672 null0 00:28:13.672 [2024-11-20 16:39:59.395224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.672 [2024-11-20 16:39:59.419427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2380252 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2380252 /var/tmp/bperf.sock 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2380252 ']' 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.672 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.673 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.673 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:13.673 16:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:13.673 [2024-11-20 16:39:59.474723] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:13.673 [2024-11-20 16:39:59.474770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380252 ] 00:28:13.673 [2024-11-20 16:39:59.556463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.673 [2024-11-20 16:39:59.586271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.614 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.186 nvme0n1 00:28:15.186 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:15.186 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.186 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.186 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.186 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:15.186 16:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:15.186 Running I/O for 2 seconds... 00:28:15.186 [2024-11-20 16:40:00.969426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:00.969455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:00.969464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:00.981566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:00.981584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:00.981592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:00.994217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:00.994234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:00.994241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.007199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.007217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25208 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.007223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.017123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.017141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.017148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.031236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.031253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.031266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.044354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.044372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.044379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.058372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.058391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.058398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.069406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.069423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.069430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.082951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.082968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.082975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.095002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.095019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.095025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.107414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.107431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.107438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.118394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.118411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.118418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.186 [2024-11-20 16:40:01.133256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.186 [2024-11-20 16:40:01.133273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.186 [2024-11-20 16:40:01.133279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.448 [2024-11-20 16:40:01.143724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.448 [2024-11-20 16:40:01.143742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.448 [2024-11-20 16:40:01.143748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.448 [2024-11-20 16:40:01.157654] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.448 [2024-11-20 16:40:01.157670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.157677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.170430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.170447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.170454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.183436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.183453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.183460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.196212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.196229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.196236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.208163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.208181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.208187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.220756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.220774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.220780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.232546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.232563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.232570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.244193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.244210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.244219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.258276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.258292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.258299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.271364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.271380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.271387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.284161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.284179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.284186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.295721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.295736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 
16:40:01.295743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.306846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.306863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.306870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.322175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.322192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.322198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.332838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.332855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.332862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.345587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.345604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10220 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.345610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.359319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.359338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.359345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.371333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.371349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.371356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.383949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.383966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.383972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.449 [2024-11-20 16:40:01.395516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.449 [2024-11-20 16:40:01.395532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.449 [2024-11-20 16:40:01.395538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.407200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.407216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.407222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.422587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.422603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.422610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.436628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.436645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.436652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.448229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.448245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.448251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.460367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.460383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.460390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.473536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.473553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.473560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.487054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.487070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.487077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.498328] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.498345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.498352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.510448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.510465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.510472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.523969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.523988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.523995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.537039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.537055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.537061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.548156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.548172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.548178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.561116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.561131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.561138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.573273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.573290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.573299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.584788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.584805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.584811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.597863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.597879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.597886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.710 [2024-11-20 16:40:01.610973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.710 [2024-11-20 16:40:01.610993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.710 [2024-11-20 16:40:01.611000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.711 [2024-11-20 16:40:01.625197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.711 [2024-11-20 16:40:01.625213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.711 [2024-11-20 16:40:01.625221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.711 [2024-11-20 16:40:01.637920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.711 [2024-11-20 16:40:01.637936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.711 [2024-11-20 
16:40:01.637943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.711 [2024-11-20 16:40:01.649977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.711 [2024-11-20 16:40:01.649996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.711 [2024-11-20 16:40:01.650002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.711 [2024-11-20 16:40:01.660216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.711 [2024-11-20 16:40:01.660233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.711 [2024-11-20 16:40:01.660239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.972 [2024-11-20 16:40:01.674158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.972 [2024-11-20 16:40:01.674175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.972 [2024-11-20 16:40:01.674181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.972 [2024-11-20 16:40:01.687276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.972 [2024-11-20 16:40:01.687292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1886 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.972 [2024-11-20 16:40:01.687299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.972 [2024-11-20 16:40:01.699515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.699531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.699537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.711014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.711031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.711037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.724206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.724222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.724229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.737677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.737694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.737700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.748839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.748855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.748861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.762417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.762433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.762440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.774830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.774847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.774853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.788383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.788399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.788408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.797836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.797853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.797859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.811192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.811209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.811216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.824402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.824418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.824425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.837978] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.837998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.838005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.851057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.851074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.851081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.862867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.862883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.862890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.876510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.876527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.876534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.889082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.889099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.889106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.900156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.900175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.900182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.911569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.911586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.911592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.973 [2024-11-20 16:40:01.925808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:15.973 [2024-11-20 16:40:01.925824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.973 [2024-11-20 16:40:01.925831] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:01.939111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:01.939127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:01.939134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:01.950542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:01.950559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:01.950565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 20175.00 IOPS, 78.81 MiB/s [2024-11-20T15:40:02.194Z] [2024-11-20 16:40:01.961901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:01.961918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:01.961924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:01.975810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:01.975827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10343 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:01.975833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:01.989219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:01.989236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:01.989242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.002259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.002277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.002283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.013462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.013479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.013485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.026287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.026303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.026310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.038798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.038815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.038821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.050557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.050574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.050581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.063730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.063747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.063753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.077273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 
00:28:16.235 [2024-11-20 16:40:02.077289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.077295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.089992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.090008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.090014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.101136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.101153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.101159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.115921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.235 [2024-11-20 16:40:02.115940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.235 [2024-11-20 16:40:02.115947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.235 [2024-11-20 16:40:02.129443] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.236 [2024-11-20 16:40:02.129460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.236 [2024-11-20 16:40:02.129466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.236 [2024-11-20 16:40:02.142555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.236 [2024-11-20 16:40:02.142572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.236 [2024-11-20 16:40:02.142579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.236 [2024-11-20 16:40:02.155833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.236 [2024-11-20 16:40:02.155851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.236 [2024-11-20 16:40:02.155857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.236 [2024-11-20 16:40:02.168993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.236 [2024-11-20 16:40:02.169010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.236 [2024-11-20 16:40:02.169016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:16.236 [2024-11-20 16:40:02.180912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.236 [2024-11-20 16:40:02.180928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.236 [2024-11-20 16:40:02.180934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.502 [2024-11-20 16:40:02.193323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.502 [2024-11-20 16:40:02.193340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.502 [2024-11-20 16:40:02.193346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.502 [2024-11-20 16:40:02.205997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.502 [2024-11-20 16:40:02.206014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.502 [2024-11-20 16:40:02.206020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.502 [2024-11-20 16:40:02.217582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.502 [2024-11-20 16:40:02.217598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.502 [2024-11-20 16:40:02.217606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.502 [2024-11-20 16:40:02.230979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.502 [2024-11-20 16:40:02.231000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.502 [2024-11-20 16:40:02.231006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.502 [2024-11-20 16:40:02.242588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.502 [2024-11-20 16:40:02.242604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.502 [2024-11-20 16:40:02.242610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.502 [2024-11-20 16:40:02.255555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.502 [2024-11-20 16:40:02.255572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.502 [2024-11-20 16:40:02.255579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.502 [2024-11-20 16:40:02.269511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.269527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 
16:40:02.269534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.281849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.281865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.281872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.292259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.292275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.292281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.306681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.306698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.306705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.319522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.319539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20429 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.319545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.333850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.333867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.333876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.346932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.346948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.346955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.356322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.356338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.356344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.370550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.370566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.370572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.384966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.384988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.384995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.397408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.397425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.397431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.410856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.410872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.410878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.421065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.421081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.421088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.434056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.434073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.434079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.503 [2024-11-20 16:40:02.447098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.503 [2024-11-20 16:40:02.447118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.503 [2024-11-20 16:40:02.447124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.461986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.462003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.462009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.474280] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.474296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.474302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.486903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.486920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.486926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.497739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.497755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.497761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.511127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.511143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.511149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.526328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.526345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.526351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.538516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.538532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.538539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.552246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.552262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.552268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.562259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.562276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.562282] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.576979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.576998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.577005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.589685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.589700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.589706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.602185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.602200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 16:40:02.602206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.612952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.612969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.827 [2024-11-20 
16:40:02.612975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.827 [2024-11-20 16:40:02.626133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.827 [2024-11-20 16:40:02.626149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.626155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.639081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.639098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.639104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.651412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.651428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.651434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.664637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.664654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24221 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.664665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.676844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.676860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.676866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.689194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.689210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.689216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.702357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.702373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.702380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.714572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.714588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.714594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.725989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.726005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.726012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.739440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.739457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.739463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.828 [2024-11-20 16:40:02.751220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:16.828 [2024-11-20 16:40:02.751236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.828 [2024-11-20 16:40:02.751242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.765473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.765489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.765495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.778815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.778831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.778837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.791532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.791549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.791555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.802170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.802186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.802192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.816040] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.816056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.816062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.825645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.825662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.825668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.839755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.839771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.839778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.853298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.853314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.853321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.866468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.866484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.866491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.876824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.876840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.876850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.890357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.137 [2024-11-20 16:40:02.890374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.137 [2024-11-20 16:40:02.890380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.137 [2024-11-20 16:40:02.903818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.138 [2024-11-20 16:40:02.903834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.138 [2024-11-20 16:40:02.903841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.138 [2024-11-20 16:40:02.916493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.138 [2024-11-20 16:40:02.916509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.138 [2024-11-20 16:40:02.916516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.138 [2024-11-20 16:40:02.928454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.138 [2024-11-20 16:40:02.928471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.138 [2024-11-20 16:40:02.928477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.138 [2024-11-20 16:40:02.941975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.138 [2024-11-20 16:40:02.941994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.138 [2024-11-20 16:40:02.942001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.138 [2024-11-20 16:40:02.953608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13961c0) 00:28:17.138 [2024-11-20 16:40:02.953624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.138 [2024-11-20 16:40:02.953631] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.138 20145.00 IOPS, 78.69 MiB/s 00:28:17.138 Latency(us) 00:28:17.138 [2024-11-20T15:40:03.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.138 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:17.138 nvme0n1 : 2.04 19775.96 77.25 0.00 0.00 6338.93 2198.19 50681.17 00:28:17.138 [2024-11-20T15:40:03.097Z] =================================================================================================================== 00:28:17.138 [2024-11-20T15:40:03.097Z] Total : 19775.96 77.25 0.00 0.00 6338.93 2198.19 50681.17 00:28:17.138 { 00:28:17.138 "results": [ 00:28:17.138 { 00:28:17.138 "job": "nvme0n1", 00:28:17.138 "core_mask": "0x2", 00:28:17.138 "workload": "randread", 00:28:17.138 "status": "finished", 00:28:17.138 "queue_depth": 128, 00:28:17.138 "io_size": 4096, 00:28:17.138 "runtime": 2.043795, 00:28:17.138 "iops": 19775.956003415216, 00:28:17.138 "mibps": 77.24982813834069, 00:28:17.138 "io_failed": 0, 00:28:17.138 "io_timeout": 0, 00:28:17.138 "avg_latency_us": 6338.926009863592, 00:28:17.138 "min_latency_us": 2198.1866666666665, 00:28:17.138 "max_latency_us": 50681.17333333333 00:28:17.138 } 00:28:17.138 ], 00:28:17.138 "core_count": 1 00:28:17.138 } 00:28:17.138 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:17.138 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:17.138 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:17.138 | .driver_specific 00:28:17.138 | .nvme_error 00:28:17.138 | .status_code 00:28:17.138 | .command_transient_transport_error' 00:28:17.138 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2380252 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2380252 ']' 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2380252 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2380252 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2380252' 00:28:17.400 killing process with pid 2380252 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2380252 00:28:17.400 Received shutdown signal, test time was about 2.000000 seconds 00:28:17.400 00:28:17.400 Latency(us) 00:28:17.400 [2024-11-20T15:40:03.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.400 [2024-11-20T15:40:03.359Z] =================================================================================================================== 00:28:17.400 [2024-11-20T15:40:03.359Z] Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2380252 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:17.400 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2380943 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2380943 /var/tmp/bperf.sock 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2380943 ']' 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.661 16:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.661 [2024-11-20 16:40:03.406300] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:17.661 [2024-11-20 16:40:03.406355] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2380943 ] 00:28:17.661 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:17.661 Zero copy mechanism will not be used. 00:28:17.661 [2024-11-20 16:40:03.490212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.661 [2024-11-20 16:40:03.519057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.604 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.866 nvme0n1 00:28:18.866 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:18.866 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.866 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:18.866 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.866 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:18.866 16:40:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:18.866 Zero copy mechanism will not be used. 00:28:18.866 Running I/O for 2 seconds... 
00:28:18.866 [2024-11-20 16:40:04.765063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:18.866 [2024-11-20 16:40:04.765095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.866 [2024-11-20 16:40:04.765104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.866 [2024-11-20 16:40:04.775650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:18.866 [2024-11-20 16:40:04.775678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.866 [2024-11-20 16:40:04.775685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.866 [2024-11-20 16:40:04.783942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:18.866 [2024-11-20 16:40:04.783962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.866 [2024-11-20 16:40:04.783969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.866 [2024-11-20 16:40:04.790453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:18.866 [2024-11-20 16:40:04.790471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.866 [2024-11-20 16:40:04.790477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:18.866 [2024-11-20 16:40:04.793460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:18.866 [2024-11-20 16:40:04.793478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.866 [2024-11-20 16:40:04.793485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:18.866 [2024-11-20 16:40:04.801530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:18.866 [2024-11-20 16:40:04.801549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.866 [2024-11-20 16:40:04.801556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:18.866 [2024-11-20 16:40:04.809768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:18.866 [2024-11-20 16:40:04.809787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.866 [2024-11-20 16:40:04.809793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:18.866 [2024-11-20 16:40:04.819613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:18.866 [2024-11-20 16:40:04.819632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.866 [2024-11-20 16:40:04.819638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.829104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.829122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.128 [2024-11-20 16:40:04.829129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.836713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.836731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.128 [2024-11-20 16:40:04.836737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.843314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.843333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.128 [2024-11-20 16:40:04.843339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.851175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.851194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:19.128 [2024-11-20 16:40:04.851200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.860332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.860350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.128 [2024-11-20 16:40:04.860356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.869293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.869311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.128 [2024-11-20 16:40:04.869318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.880195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.880214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.128 [2024-11-20 16:40:04.880220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.885244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.885262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.128 [2024-11-20 16:40:04.885268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.896093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.896111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.128 [2024-11-20 16:40:04.896118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.128 [2024-11-20 16:40:04.907224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.128 [2024-11-20 16:40:04.907242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.907248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.916496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.916521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.916527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.927146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.927164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.927171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.933340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.933357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.933364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.942673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.942692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.942698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.949362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.949380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.949386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.960239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 
00:28:19.129 [2024-11-20 16:40:04.960257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.960263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.969732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.969750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.969757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.974474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.974492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.974498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.981948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.981967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.981973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:04.992093] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:04.992111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:04.992118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.002701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.002719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.002726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.011952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.011970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.011977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.020616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.020634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.020640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.029960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.029979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.029992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.040619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.040637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.040643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.050183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.050201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.050207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.058158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.058176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.058182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.068446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.068464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.068474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.129 [2024-11-20 16:40:05.077980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.129 [2024-11-20 16:40:05.078005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.129 [2024-11-20 16:40:05.078011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.089379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.089397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.089404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.100666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.100685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.100691] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.111332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.111351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.111357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.123834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.123852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.123859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.134122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.134140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.134146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.144936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.144954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.144960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.155412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.155431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.155437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.167525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.167547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.167553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.179660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.179678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.179684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.191495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.191513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.191519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.203978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.204001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.204007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.216023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.216041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.391 [2024-11-20 16:40:05.216047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.391 [2024-11-20 16:40:05.228316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.391 [2024-11-20 16:40:05.228335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.228341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.240817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.240836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.240842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.252866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.252884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.252890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.265104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.265122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.265128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.276775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.276793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.276800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.288798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 
00:28:19.392 [2024-11-20 16:40:05.288816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.288822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.299857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.299875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.299881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.310466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.310484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.310490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.319783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.319801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.319807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.331634] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.331652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.331658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:19.392 [2024-11-20 16:40:05.342405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.392 [2024-11-20 16:40:05.342423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.392 [2024-11-20 16:40:05.342430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:19.653 [2024-11-20 16:40:05.353162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.653 [2024-11-20 16:40:05.353179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.653 [2024-11-20 16:40:05.353186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:19.653 [2024-11-20 16:40:05.362218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:19.653 [2024-11-20 16:40:05.362236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:19.653 [2024-11-20 16:40:05.362245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0
00:28:19.653 [2024-11-20 16:40:05.373297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.653 [2024-11-20 16:40:05.373316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.653 [2024-11-20 16:40:05.373323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.653 [2024-11-20 16:40:05.383946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.653 [2024-11-20 16:40:05.383964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.653 [2024-11-20 16:40:05.383970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.395307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.395325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.395332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.405857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.405876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.405883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.417197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.417216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.417222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.428416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.428435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.428441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.440242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.440260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.440267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.450915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.450933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.450939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.460909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.460931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.460938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.471082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.471100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.471107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.481433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.481452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.481458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.492816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.492834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.492840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.503554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.503573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.503579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.515765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.515784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.515790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.525068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.525086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.525092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.532639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.532658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.532664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.541992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.542010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.542016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.553519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.553538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.553544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.563830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.563849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.563855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.572683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.572701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.572707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.583601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.583620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.583627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.593271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.593289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.593295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.654 [2024-11-20 16:40:05.604369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.654 [2024-11-20 16:40:05.604388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.654 [2024-11-20 16:40:05.604394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.615513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.615532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.615538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.625941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.625959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.625966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.635580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.635598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.635608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.646773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.646791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.646798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.655367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.655386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.655392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.665719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.665737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.665744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.676117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.676135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.676141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.686997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.687015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.687022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.695755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.695773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.695780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.704900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.704919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.704925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.716726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.716744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.716751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.728254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.728273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.728279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.739058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.739077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.739083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.748496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.748515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.748521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.917 3051.00 IOPS, 381.38 MiB/s [2024-11-20T15:40:05.876Z] [2024-11-20 16:40:05.760632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.760651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.760658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.772387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.772406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.772412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.780422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.780440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.780446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.791025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.791043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.791050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.802729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.802748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.802754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.815453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.815471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.815481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.828079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.828097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.828104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.841277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.841295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.841302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.851137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.851156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.851163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:19.917 [2024-11-20 16:40:05.862557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:19.917 [2024-11-20 16:40:05.862576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:19.917 [2024-11-20 16:40:05.862582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:20.179 [2024-11-20 16:40:05.874699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.179 [2024-11-20 16:40:05.874718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.179 [2024-11-20 16:40:05.874724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:20.179 [2024-11-20 16:40:05.886518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.179 [2024-11-20 16:40:05.886537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.179 [2024-11-20 16:40:05.886543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:20.179 [2024-11-20 16:40:05.896047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.179 [2024-11-20 16:40:05.896065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.179 [2024-11-20 16:40:05.896071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:20.179 [2024-11-20 16:40:05.907129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.179 [2024-11-20 16:40:05.907147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.179 [2024-11-20 16:40:05.907153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:20.179 [2024-11-20 16:40:05.917376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.179 [2024-11-20 16:40:05.917399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.179 [2024-11-20 16:40:05.917405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:20.179 [2024-11-20 16:40:05.929357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.179 [2024-11-20 16:40:05.929375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.179 [2024-11-20 16:40:05.929381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:05.939032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:05.939049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:05.939056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:05.949055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:05.949073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:05.949079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:05.959890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:05.959908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:05.959915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:05.971589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:05.971607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:05.971614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:05.980940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:05.980959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:05.980965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:05.992085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:05.992102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:05.992109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.002632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.002651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.002657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.013787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.013806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.013813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.025653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.025672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.025678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.037285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.037303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.037309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.047726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.047745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.047751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.059427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.059445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.059452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.069711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.069729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.069735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.080618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.080636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.080642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.090674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.090692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.090698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.101317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.101335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.101345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.111786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.111804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.111810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.122997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.123015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.123021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:20.180 [2024-11-20 16:40:06.131154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.180 [2024-11-20 16:40:06.131172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.180 [2024-11-20 16:40:06.131178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:20.442 [2024-11-20 16:40:06.142839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.442 [2024-11-20 16:40:06.142858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:20.442 [2024-11-20 16:40:06.142864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:20.442 [2024-11-20 16:40:06.153492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60)
00:28:20.442 [2024-11-20 16:40:06.153510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.153516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.163613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.163631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.163637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.173339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.173357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.173363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.182674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.182692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.182698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.193890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.193909] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.193915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.204792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.204810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.204816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.214766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.214784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.214790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.225371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.225389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.225396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.237238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.237257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.237263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.249960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.249978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.249990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.262887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.262905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.262911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.275979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.276001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.276008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.289117] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.289136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.289147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.302024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.302041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.302048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.315248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.315266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.315272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.328105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.328123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.328130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.340855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.340874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.340880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.353779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.442 [2024-11-20 16:40:06.353798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.442 [2024-11-20 16:40:06.353804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.442 [2024-11-20 16:40:06.365414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.443 [2024-11-20 16:40:06.365432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.443 [2024-11-20 16:40:06.365438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.443 [2024-11-20 16:40:06.377667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.443 [2024-11-20 16:40:06.377685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.443 [2024-11-20 16:40:06.377691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.443 [2024-11-20 16:40:06.388006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.443 [2024-11-20 16:40:06.388023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.443 [2024-11-20 16:40:06.388030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.443 [2024-11-20 16:40:06.396908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.443 [2024-11-20 16:40:06.396928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.443 [2024-11-20 16:40:06.396935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.407344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.407362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.407369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.418317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.418335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.418342] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.430175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.430193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.430199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.441979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.442001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.442007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.450812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.450830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.450837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.462624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.462642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.462648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.474028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.474046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.474052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.484770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.484789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.484795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.496370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.496389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.496395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.507483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.507501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.507508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.519259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.519278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.519284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.529963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.529985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.529992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.539724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.539742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.539748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.550650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.550668] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.550675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.561688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.561706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.561712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.572399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.572417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.572424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.584132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.704 [2024-11-20 16:40:06.584150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.704 [2024-11-20 16:40:06.584160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.704 [2024-11-20 16:40:06.595222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x194ea60) 00:28:20.705 [2024-11-20 16:40:06.595240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.705 [2024-11-20 16:40:06.595247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.705 [2024-11-20 16:40:06.606196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.705 [2024-11-20 16:40:06.606214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.705 [2024-11-20 16:40:06.606221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.705 [2024-11-20 16:40:06.618024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.705 [2024-11-20 16:40:06.618043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.705 [2024-11-20 16:40:06.618049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.705 [2024-11-20 16:40:06.628717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.705 [2024-11-20 16:40:06.628735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.705 [2024-11-20 16:40:06.628741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.705 [2024-11-20 16:40:06.639323] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.705 [2024-11-20 16:40:06.639341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.705 [2024-11-20 16:40:06.639348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.705 [2024-11-20 16:40:06.651120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.705 [2024-11-20 16:40:06.651138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.705 [2024-11-20 16:40:06.651144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.661966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.661990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.661996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.672796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.672814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.672820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.685458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.685479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.685485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.697071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.697089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.697096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.707426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.707444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.707451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.719159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.719178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.719184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.730219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.730237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.730243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.740459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.740477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.740483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.975 [2024-11-20 16:40:06.750796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x194ea60) 00:28:20.975 [2024-11-20 16:40:06.750815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.975 [2024-11-20 16:40:06.750821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:20.975 2921.50 IOPS, 365.19 MiB/s 00:28:20.975 Latency(us) 00:28:20.975 [2024-11-20T15:40:06.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.975 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:20.975 nvme0n1 : 2.00 2924.20 365.53 0.00 0.00 5467.86 1119.57 20534.61 00:28:20.975 
[2024-11-20T15:40:06.934Z] =================================================================================================================== 00:28:20.975 [2024-11-20T15:40:06.934Z] Total : 2924.20 365.53 0.00 0.00 5467.86 1119.57 20534.61 00:28:20.975 { 00:28:20.975 "results": [ 00:28:20.975 { 00:28:20.975 "job": "nvme0n1", 00:28:20.975 "core_mask": "0x2", 00:28:20.975 "workload": "randread", 00:28:20.975 "status": "finished", 00:28:20.975 "queue_depth": 16, 00:28:20.975 "io_size": 131072, 00:28:20.975 "runtime": 2.003622, 00:28:20.975 "iops": 2924.20426607414, 00:28:20.975 "mibps": 365.5255332592675, 00:28:20.975 "io_failed": 0, 00:28:20.975 "io_timeout": 0, 00:28:20.975 "avg_latency_us": 5467.862884451271, 00:28:20.975 "min_latency_us": 1119.5733333333333, 00:28:20.975 "max_latency_us": 20534.613333333335 00:28:20.975 } 00:28:20.975 ], 00:28:20.975 "core_count": 1 00:28:20.975 } 00:28:20.975 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:20.976 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:20.976 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:20.976 | .driver_specific 00:28:20.976 | .nvme_error 00:28:20.976 | .status_code 00:28:20.976 | .command_transient_transport_error' 00:28:20.976 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:21.244 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 189 > 0 )) 00:28:21.244 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2380943 00:28:21.244 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2380943 ']' 00:28:21.244 16:40:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2380943 00:28:21.244 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:21.244 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.244 16:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2380943 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2380943' 00:28:21.244 killing process with pid 2380943 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2380943 00:28:21.244 Received shutdown signal, test time was about 2.000000 seconds 00:28:21.244 00:28:21.244 Latency(us) 00:28:21.244 [2024-11-20T15:40:07.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.244 [2024-11-20T15:40:07.203Z] =================================================================================================================== 00:28:21.244 [2024-11-20T15:40:07.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2380943 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randwrite 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2381625 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2381625 /var/tmp/bperf.sock 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2381625 ']' 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.244 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.244 [2024-11-20 16:40:07.183441] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:28:21.244 [2024-11-20 16:40:07.183515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381625 ] 00:28:21.505 [2024-11-20 16:40:07.269195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.505 [2024-11-20 16:40:07.298786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.076 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.076 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:22.076 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.076 16:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.336 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:22.336 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.336 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.336 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.336 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.336 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.602 nvme0n1 00:28:22.602 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:22.602 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.602 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.602 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.602 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:22.602 16:40:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:22.863 Running I/O for 2 seconds... 
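The "Data digest error on tqpair" lines that follow are the expected outcome of this setup: the controller was attached with `--ddgst` (NVMe/TCP data digest enabled), and the `accel_error_inject_error -o crc32c -t corrupt` call above makes the crc32c accel operation produce wrong checksums, so data PDU verification fails. As a minimal illustrative sketch (a pure-Python reference implementation, not SPDK's accel API — real stacks use hardware or SSE4.2 offload), the digest being corrupted here is CRC32C (Castagnoli):

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC32C over `data`, reflected polynomial 0x82F63B78.

    This is the checksum NVMe/TCP applies to each data PDU when the
    data digest is negotiated; a mismatch is reported by the target
    as a data digest error, as seen in the log below.
    """
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right, XOR in the polynomial when the low bit is set.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283
```

With the injected corruption, every checked PDU fails this comparison, which is why the bdevperf run below completes with zero successful I/O in its latency summary.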
00:28:22.863 [2024-11-20 16:40:08.626338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef3a28 00:28:22.863 [2024-11-20 16:40:08.628421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.863 [2024-11-20 16:40:08.628446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:22.863 [2024-11-20 16:40:08.636752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee01f8 00:28:22.863 [2024-11-20 16:40:08.638153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.863 [2024-11-20 16:40:08.638172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:22.863 [2024-11-20 16:40:08.647932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee7818 00:28:22.863 [2024-11-20 16:40:08.649326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.863 [2024-11-20 16:40:08.649343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:22.863 [2024-11-20 16:40:08.662162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee7818 00:28:22.863 [2024-11-20 16:40:08.664223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.863 [2024-11-20 16:40:08.664238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:22.863 [2024-11-20 16:40:08.672524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee8088 00:28:22.863 [2024-11-20 16:40:08.673905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.863 [2024-11-20 16:40:08.673922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:22.863 [2024-11-20 16:40:08.683623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee88f8 00:28:22.863 [2024-11-20 16:40:08.684933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.863 [2024-11-20 16:40:08.684949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.863 [2024-11-20 16:40:08.696320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee1b48 00:28:22.863 [2024-11-20 16:40:08.697711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.863 [2024-11-20 16:40:08.697728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.863 [2024-11-20 16:40:08.708287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef6458 00:28:22.863 [2024-11-20 16:40:08.709614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.863 [2024-11-20 16:40:08.709630] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.863 [2024-11-20 16:40:08.720203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef2d80 00:28:22.863 [2024-11-20 16:40:08.721554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.864 [2024-11-20 16:40:08.721570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:22.864 [2024-11-20 16:40:08.732146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edfdc0 00:28:22.864 [2024-11-20 16:40:08.733532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.864 [2024-11-20 16:40:08.733548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.864 [2024-11-20 16:40:08.743326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee9168 00:28:22.864 [2024-11-20 16:40:08.744674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.864 [2024-11-20 16:40:08.744689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:22.864 [2024-11-20 16:40:08.755986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee9168 00:28:22.864 [2024-11-20 16:40:08.757325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.864 [2024-11-20 16:40:08.757341] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.864 [2024-11-20 16:40:08.767887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee9168 00:28:22.864 [2024-11-20 16:40:08.769236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.864 [2024-11-20 16:40:08.769251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.864 [2024-11-20 16:40:08.779806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee9168 00:28:22.864 [2024-11-20 16:40:08.781154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.864 [2024-11-20 16:40:08.781170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.864 [2024-11-20 16:40:08.793211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee9168 00:28:22.864 [2024-11-20 16:40:08.795220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.864 [2024-11-20 16:40:08.795235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:22.864 [2024-11-20 16:40:08.803639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef6890 00:28:22.864 [2024-11-20 16:40:08.804996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:22.864 [2024-11-20 16:40:08.805012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:22.864 [2024-11-20 16:40:08.815588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef7970 00:28:22.864 [2024-11-20 16:40:08.816933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.864 [2024-11-20 16:40:08.816949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:23.125 [2024-11-20 16:40:08.829055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef35f0 00:28:23.125 [2024-11-20 16:40:08.831031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.125 [2024-11-20 16:40:08.831047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:23.125 [2024-11-20 16:40:08.839453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edf988 00:28:23.125 [2024-11-20 16:40:08.840826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.125 [2024-11-20 16:40:08.840844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:23.125 [2024-11-20 16:40:08.850519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee01f8 00:28:23.125 [2024-11-20 16:40:08.851846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1498 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.851862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.861259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efeb58 00:28:23.126 [2024-11-20 16:40:08.862108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.862123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.873317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee3060 00:28:23.126 [2024-11-20 16:40:08.874161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.874176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.885243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee3060 00:28:23.126 [2024-11-20 16:40:08.886002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.886018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.899407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee3d08 00:28:23.126 [2024-11-20 16:40:08.900909] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.900925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.910485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee3060 00:28:23.126 [2024-11-20 16:40:08.911979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.911997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.922329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee3d08 00:28:23.126 [2024-11-20 16:40:08.923798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.923814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.936498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee3060 00:28:23.126 [2024-11-20 16:40:08.938606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.938622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.946899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eee5c8 00:28:23.126 [2024-11-20 16:40:08.948383] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.948398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.958070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efef90 00:28:23.126 [2024-11-20 16:40:08.959519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.959534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.970764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee27f0 00:28:23.126 [2024-11-20 16:40:08.972249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.972265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.982631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efef90 00:28:23.126 [2024-11-20 16:40:08.984080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.984096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:08.996098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with 
pdu=0x200016eeee38 00:28:23.126 [2024-11-20 16:40:08.998205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:08.998222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:09.006448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eed4e8 00:28:23.126 [2024-11-20 16:40:09.007891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:09.007906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:09.018372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eed4e8 00:28:23.126 [2024-11-20 16:40:09.019809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:09.019825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:09.030265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee3d08 00:28:23.126 [2024-11-20 16:40:09.031735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:09.031751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:09.043734] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2084750) with pdu=0x200016efef90 00:28:23.126 [2024-11-20 16:40:09.045818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:09.045834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:09.054079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efb480 00:28:23.126 [2024-11-20 16:40:09.055522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:09.055538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:09.065954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efb480 00:28:23.126 [2024-11-20 16:40:09.067397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:09.067413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:23.126 [2024-11-20 16:40:09.077904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efb480 00:28:23.126 [2024-11-20 16:40:09.079372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.126 [2024-11-20 16:40:09.079388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.088756] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efdeb0 00:28:23.388 [2024-11-20 16:40:09.089720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.089736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.102078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eeff18 00:28:23.388 [2024-11-20 16:40:09.103679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.103694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.113564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edf988 00:28:23.388 [2024-11-20 16:40:09.115158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.115173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.123526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef4f40 00:28:23.388 [2024-11-20 16:40:09.124633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.124648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 
dnr:0 00:28:23.388 [2024-11-20 16:40:09.136200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef4f40 00:28:23.388 [2024-11-20 16:40:09.137185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.137201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.148111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee38d0 00:28:23.388 [2024-11-20 16:40:09.149177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.149196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.160036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efcdd0 00:28:23.388 [2024-11-20 16:40:09.161177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.161193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.171907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee38d0 00:28:23.388 [2024-11-20 16:40:09.173018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.173033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.183871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eefae0 00:28:23.388 [2024-11-20 16:40:09.184927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.184942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:23.388 [2024-11-20 16:40:09.195041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef4f40 00:28:23.388 [2024-11-20 16:40:09.196117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.388 [2024-11-20 16:40:09.196132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:23.389 [2024-11-20 16:40:09.207762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efeb58 00:28:23.389 [2024-11-20 16:40:09.208845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.389 [2024-11-20 16:40:09.208861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:23.389 [2024-11-20 16:40:09.219699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efb8b8 00:28:23.389 [2024-11-20 16:40:09.220771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.389 [2024-11-20 16:40:09.220787] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.231660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efc998
00:28:23.389 [2024-11-20 16:40:09.232759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.232775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.243577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eec408
00:28:23.389 [2024-11-20 16:40:09.244692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.244708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.254731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef9f68
00:28:23.389 [2024-11-20 16:40:09.255829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.255844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.267411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef9f68
00:28:23.389 [2024-11-20 16:40:09.268494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.268510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.280841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef9f68
00:28:23.389 [2024-11-20 16:40:09.282564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.282579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.292709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eeff18
00:28:23.389 [2024-11-20 16:40:09.294429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.294445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.304559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efa7d8
00:28:23.389 [2024-11-20 16:40:09.306233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.306249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.314957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef0ff8
00:28:23.389 [2024-11-20 16:40:09.316025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.316041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.326125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efc128
00:28:23.389 [2024-11-20 16:40:09.327136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.327151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:28:23.389 [2024-11-20 16:40:09.340403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efd208
00:28:23.389 [2024-11-20 16:40:09.342104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.389 [2024-11-20 16:40:09.342120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.350024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef2510
00:28:23.651 [2024-11-20 16:40:09.351038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.351053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.362727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efa7d8
00:28:23.651 [2024-11-20 16:40:09.363781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.363796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.374719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef96f8
00:28:23.651 [2024-11-20 16:40:09.375795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.375810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.386655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef1868
00:28:23.651 [2024-11-20 16:40:09.387608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.387624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.400141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee27f0
00:28:23.651 [2024-11-20 16:40:09.401842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.401858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.410494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eebfd0
00:28:23.651 [2024-11-20 16:40:09.411533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.411549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.422394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eebfd0
00:28:23.651 [2024-11-20 16:40:09.423451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.423467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.434291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eebfd0
00:28:23.651 [2024-11-20 16:40:09.435363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.435379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:28:23.651 [2024-11-20 16:40:09.445399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef9b30
00:28:23.651 [2024-11-20 16:40:09.446431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.651 [2024-11-20 16:40:09.446447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.457275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eeb760
00:28:23.652 [2024-11-20 16:40:09.458304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.458322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.469928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eeb760
00:28:23.652 [2024-11-20 16:40:09.470930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.470946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.483485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef92c0
00:28:23.652 [2024-11-20 16:40:09.485142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.485157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.493097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef4b08
00:28:23.652 [2024-11-20 16:40:09.493998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.494014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.505801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eeb760
00:28:23.652 [2024-11-20 16:40:09.506862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.506880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.519272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef92c0
00:28:23.652 [2024-11-20 16:40:09.520938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.520954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.530047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee5220
00:28:23.652 [2024-11-20 16:40:09.531216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.531232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.543701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eebb98
00:28:23.652 [2024-11-20 16:40:09.545551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.545566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.554031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eeb328
00:28:23.652 [2024-11-20 16:40:09.555229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.555245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.567652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee5220
00:28:23.652 [2024-11-20 16:40:09.569478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.569494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.578435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edece0
00:28:23.652 [2024-11-20 16:40:09.579782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.579798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.589695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ede470
00:28:23.652 [2024-11-20 16:40:09.591017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.591033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:28:23.652 [2024-11-20 16:40:09.600459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efc998
00:28:23.652 [2024-11-20 16:40:09.601304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.652 [2024-11-20 16:40:09.601320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:28:23.914 21261.00 IOPS, 83.05 MiB/s [2024-11-20T15:40:09.873Z] [2024-11-20 16:40:09.616597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efcdd0
00:28:23.914 [2024-11-20 16:40:09.618737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.914 [2024-11-20 16:40:09.618753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:23.914 [2024-11-20 16:40:09.626941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efc998
00:28:23.914 [2024-11-20 16:40:09.628394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.914 [2024-11-20 16:40:09.628410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.914 [2024-11-20 16:40:09.638826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efc998
00:28:23.914 [2024-11-20 16:40:09.640179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.914 [2024-11-20 16:40:09.640195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.914 [2024-11-20 16:40:09.650722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efd208
00:28:23.914 [2024-11-20 16:40:09.652177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.914 [2024-11-20 16:40:09.652192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:28:23.914 [2024-11-20 16:40:09.662674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef31b8
00:28:23.914 [2024-11-20 16:40:09.664140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.914 [2024-11-20 16:40:09.664155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:28:23.914 [2024-11-20 16:40:09.674618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee99d8
00:28:23.914 [2024-11-20 16:40:09.676050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.914 [2024-11-20 16:40:09.676065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:28:23.914 [2024-11-20 16:40:09.686510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee88f8
00:28:23.914 [2024-11-20 16:40:09.687953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.914 [2024-11-20 16:40:09.687969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:28:23.914 [2024-11-20 16:40:09.698429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efa3a0
00:28:23.914 [2024-11-20 16:40:09.699897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.914 [2024-11-20 16:40:09.699913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:28:23.914 [2024-11-20 16:40:09.709508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efac10
00:28:23.915 [2024-11-20 16:40:09.710933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.710949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.722168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efac10
00:28:23.915 [2024-11-20 16:40:09.723608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.723624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.734060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efc560
00:28:23.915 [2024-11-20 16:40:09.735511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.735527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.747501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef6cc8
00:28:23.915 [2024-11-20 16:40:09.749572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.749587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.757107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee5a90
00:28:23.915 [2024-11-20 16:40:09.758517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.758532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.768965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eeea00
00:28:23.915 [2024-11-20 16:40:09.770393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.770410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.783173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee6fa8
00:28:23.915 [2024-11-20 16:40:09.785204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.785219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.793568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee4578
00:28:23.915 [2024-11-20 16:40:09.794970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.794989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.807041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee5a90
00:28:23.915 [2024-11-20 16:40:09.809089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.809104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.817405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eea248
00:28:23.915 [2024-11-20 16:40:09.818806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.818821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.829302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eea248
00:28:23.915 [2024-11-20 16:40:09.830711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.830727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.841216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eea248
00:28:23.915 [2024-11-20 16:40:09.842576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.842592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.854664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee6b70
00:28:23.915 [2024-11-20 16:40:09.856662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.856677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:23.915 [2024-11-20 16:40:09.865072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef7100
00:28:23.915 [2024-11-20 16:40:09.866476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.915 [2024-11-20 16:40:09.866491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:24.177 [2024-11-20 16:40:09.877008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef81e0
00:28:24.177 [2024-11-20 16:40:09.878394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.177 [2024-11-20 16:40:09.878409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:24.177 [2024-11-20 16:40:09.888936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee99d8
00:28:24.177 [2024-11-20 16:40:09.890356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.177 [2024-11-20 16:40:09.890372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:24.177 [2024-11-20 16:40:09.900793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef81e0
00:28:24.177 [2024-11-20 16:40:09.902235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.177 [2024-11-20 16:40:09.902251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:24.177 [2024-11-20 16:40:09.914206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef92c0
00:28:24.177 [2024-11-20 16:40:09.916236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.177 [2024-11-20 16:40:09.916251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:28:24.177 [2024-11-20 16:40:09.923351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef6020
00:28:24.178 [2024-11-20 16:40:09.924387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:09.924402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:09.936853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee0630
00:28:24.178 [2024-11-20 16:40:09.938389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:09.938404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:09.948151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef0ff8
00:28:24.178 [2024-11-20 16:40:09.949668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:09.949683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:09.958501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eddc00
00:28:24.178 [2024-11-20 16:40:09.959369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:09.959385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:09.970393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eddc00
00:28:24.178 [2024-11-20 16:40:09.971266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:09.971281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:09.982301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eddc00
00:28:24.178 [2024-11-20 16:40:09.983154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:09.983170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:09.994209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eddc00
00:28:24.178 [2024-11-20 16:40:09.995056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:09.995071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.006579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edece0
00:28:24.178 [2024-11-20 16:40:10.007442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.007458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.019992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ede470
00:28:24.178 [2024-11-20 16:40:10.021492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.021508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.030329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edece0
00:28:24.178 [2024-11-20 16:40:10.031150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.031165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.042244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edece0
00:28:24.178 [2024-11-20 16:40:10.043087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.043103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.055679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edece0
00:28:24.178 [2024-11-20 16:40:10.057137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.057152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.067488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ede470
00:28:24.178 [2024-11-20 16:40:10.068969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.068987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.079303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee5658
00:28:24.178 [2024-11-20 16:40:10.080774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.080792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.093502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edf550
00:28:24.178 [2024-11-20 16:40:10.095574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.095590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:28:24.178 [2024-11-20 16:40:10.103039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef0ff8
00:28:24.178 [2024-11-20 16:40:10.104466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.178 [2024-11-20 16:40:10.104481] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:24.178 [2024-11-20 16:40:10.113776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efdeb0 00:28:24.178 [2024-11-20 16:40:10.114717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.178 [2024-11-20 16:40:10.114732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:24.178 [2024-11-20 16:40:10.127401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef35f0 00:28:24.178 [2024-11-20 16:40:10.128986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.178 [2024-11-20 16:40:10.129002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.137752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee01f8 00:28:24.441 [2024-11-20 16:40:10.138717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.441 [2024-11-20 16:40:10.138732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.148949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016eee190 00:28:24.441 [2024-11-20 16:40:10.149838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.441 [2024-11-20 16:40:10.149854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.161572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee01f8 00:28:24.441 [2024-11-20 16:40:10.162494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.441 [2024-11-20 16:40:10.162509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.173524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee4de8 00:28:24.441 [2024-11-20 16:40:10.174421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.441 [2024-11-20 16:40:10.174436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.184666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee3498 00:28:24.441 [2024-11-20 16:40:10.185538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.441 [2024-11-20 16:40:10.185553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.199458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee5220 00:28:24.441 [2024-11-20 16:40:10.201201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4944 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:24.441 [2024-11-20 16:40:10.201216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.209780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef5be8 00:28:24.441 [2024-11-20 16:40:10.210865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.441 [2024-11-20 16:40:10.210881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.223150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efb8b8 00:28:24.441 [2024-11-20 16:40:10.224839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.441 [2024-11-20 16:40:10.224854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.233463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee8088 00:28:24.441 [2024-11-20 16:40:10.234523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.441 [2024-11-20 16:40:10.234538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:24.441 [2024-11-20 16:40:10.246889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee8088 00:28:24.442 [2024-11-20 16:40:10.248587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 
nsid:1 lba:7091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.248602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.258744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee27f0 00:28:24.442 [2024-11-20 16:40:10.260409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.260424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.268351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee88f8 00:28:24.442 [2024-11-20 16:40:10.269337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.269352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.281035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef20d8 00:28:24.442 [2024-11-20 16:40:10.282073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.282087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.294476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee27f0 00:28:24.442 [2024-11-20 16:40:10.296142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.296157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.304835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef81e0 00:28:24.442 [2024-11-20 16:40:10.305877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.305892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.316727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef81e0 00:28:24.442 [2024-11-20 16:40:10.317716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.317731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.330152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efc560 00:28:24.442 [2024-11-20 16:40:10.331810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.331825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.340897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efeb58 
00:28:24.442 [2024-11-20 16:40:10.342080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.342095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.354517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee5a90 00:28:24.442 [2024-11-20 16:40:10.356358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.356373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.364847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef7da8 00:28:24.442 [2024-11-20 16:40:10.366040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.366055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.376764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef7da8 00:28:24.442 [2024-11-20 16:40:10.377958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.377973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:24.442 [2024-11-20 16:40:10.388674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2084750) with pdu=0x200016ef7da8 00:28:24.442 [2024-11-20 16:40:10.389807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.442 [2024-11-20 16:40:10.389826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.402119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef4f40 00:28:24.704 [2024-11-20 16:40:10.403886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.403901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.413908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016efb480 00:28:24.704 [2024-11-20 16:40:10.415704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.415719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.424648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee27f0 00:28:24.704 [2024-11-20 16:40:10.425955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.425971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.436748] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef8e88 00:28:24.704 [2024-11-20 16:40:10.438058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.438073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.448659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef46d0 00:28:24.704 [2024-11-20 16:40:10.449976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.449994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.460588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef2d80 00:28:24.704 [2024-11-20 16:40:10.461913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.461929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.472493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee6fa8 00:28:24.704 [2024-11-20 16:40:10.473789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.473804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 
dnr:0 00:28:24.704 [2024-11-20 16:40:10.486101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee88f8 00:28:24.704 [2024-11-20 16:40:10.488066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.488081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.495722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef8e88 00:28:24.704 [2024-11-20 16:40:10.497022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.497037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.510549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee0630 00:28:24.704 [2024-11-20 16:40:10.512673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.512688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.522409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016edf550 00:28:24.704 [2024-11-20 16:40:10.524557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.524572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.534293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ef0bc0 00:28:24.704 [2024-11-20 16:40:10.536392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.536407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.544636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee2c28 00:28:24.704 [2024-11-20 16:40:10.546098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.546113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.556537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee2c28 00:28:24.704 [2024-11-20 16:40:10.557998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.558014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.568461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee2c28 00:28:24.704 [2024-11-20 16:40:10.569920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.569936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.580387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee2c28 00:28:24.704 [2024-11-20 16:40:10.581839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.581854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.592298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee2c28 00:28:24.704 [2024-11-20 16:40:10.593724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.593740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:24.704 [2024-11-20 16:40:10.604202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084750) with pdu=0x200016ee2c28 00:28:24.704 [2024-11-20 16:40:10.605639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.704 [2024-11-20 16:40:10.605655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:24.704 21338.50 IOPS, 83.35 MiB/s 00:28:24.704 Latency(us) 00:28:24.704 [2024-11-20T15:40:10.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.704 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:24.704 nvme0n1 : 2.00 21353.63 83.41 0.00 0.00 5987.32 2266.45 
14636.37 00:28:24.704 [2024-11-20T15:40:10.663Z] =================================================================================================================== 00:28:24.704 [2024-11-20T15:40:10.663Z] Total : 21353.63 83.41 0.00 0.00 5987.32 2266.45 14636.37 00:28:24.704 { 00:28:24.704 "results": [ 00:28:24.704 { 00:28:24.704 "job": "nvme0n1", 00:28:24.704 "core_mask": "0x2", 00:28:24.704 "workload": "randwrite", 00:28:24.704 "status": "finished", 00:28:24.704 "queue_depth": 128, 00:28:24.704 "io_size": 4096, 00:28:24.704 "runtime": 2.004577, 00:28:24.704 "iops": 21353.632212681277, 00:28:24.704 "mibps": 83.41262583078624, 00:28:24.704 "io_failed": 0, 00:28:24.704 "io_timeout": 0, 00:28:24.704 "avg_latency_us": 5987.321899778063, 00:28:24.704 "min_latency_us": 2266.4533333333334, 00:28:24.704 "max_latency_us": 14636.373333333333 00:28:24.704 } 00:28:24.704 ], 00:28:24.704 "core_count": 1 00:28:24.704 } 00:28:24.704 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:24.704 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:24.704 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:24.704 | .driver_specific 00:28:24.704 | .nvme_error 00:28:24.704 | .status_code 00:28:24.704 | .command_transient_transport_error' 00:28:24.704 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:24.965 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 )) 00:28:24.965 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2381625 00:28:24.965 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2381625 
']' 00:28:24.965 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2381625 00:28:24.965 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:24.965 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.965 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2381625 00:28:24.966 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:24.966 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:24.966 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381625' 00:28:24.966 killing process with pid 2381625 00:28:24.966 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2381625 00:28:24.966 Received shutdown signal, test time was about 2.000000 seconds 00:28:24.966 00:28:24.966 Latency(us) 00:28:24.966 [2024-11-20T15:40:10.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.966 [2024-11-20T15:40:10.925Z] =================================================================================================================== 00:28:24.966 [2024-11-20T15:40:10.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:24.966 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2381625 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:25.227 16:40:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2382430 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2382430 /var/tmp/bperf.sock 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2382430 ']' 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:25.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.227 16:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.227 [2024-11-20 16:40:11.040329] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:28:25.227 [2024-11-20 16:40:11.040383] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382430 ] 00:28:25.227 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:25.227 Zero copy mechanism will not be used. 00:28:25.227 [2024-11-20 16:40:11.123318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.227 [2024-11-20 16:40:11.152814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.169 16:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.430 nvme0n1 00:28:26.430 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:26.430 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.430 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.430 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.430 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:26.430 16:40:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.693 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.693 Zero copy mechanism will not be used. 00:28:26.693 Running I/O for 2 seconds... 
00:28:26.693 [2024-11-20 16:40:12.440394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.693 [2024-11-20 16:40:12.440469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.693 [2024-11-20 16:40:12.440493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.693 [2024-11-20 16:40:12.447445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.693 [2024-11-20 16:40:12.447528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.693 [2024-11-20 16:40:12.447546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.693 [2024-11-20 16:40:12.451995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.693 [2024-11-20 16:40:12.452080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.452095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.458401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.458480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.458496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.462503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.462571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.462586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.466822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.466889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.466905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.474115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.474186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.474205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.478590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.478652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.478667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.482941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.483028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.483043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.489643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.489731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.489747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.497508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.497589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.497605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.502286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.502352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:26.694 [2024-11-20 16:40:12.502367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.506571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.506625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.506639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.510848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.510909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.510925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.515307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.515593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.515609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.520031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.520131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.520146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.524287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.524375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.524391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.528028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.528095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.528110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.532450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.532511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.532526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.538290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.538359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.538374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.542548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.542613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.542628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.546492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.546548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.546563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.550853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.550910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.550925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.556327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 
00:28:26.694 [2024-11-20 16:40:12.556686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.694 [2024-11-20 16:40:12.556701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.694 [2024-11-20 16:40:12.562915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.694 [2024-11-20 16:40:12.562989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.563004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.568351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.568428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.568442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.575624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.575875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.575891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.581609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.581693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.581708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.585609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.585668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.585683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.589497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.589547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.589562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.593823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.594110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.594125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.599405] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.599711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.599727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.604670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.604752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.604770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.609178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.609252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.609267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.614804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.614876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.614891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:26.695 [2024-11-20 16:40:12.620637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.620697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.620712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.627905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.628014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.628030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.636669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.636979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.637000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.641537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.641616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.641631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.695 [2024-11-20 16:40:12.645624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.695 [2024-11-20 16:40:12.645674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.695 [2024-11-20 16:40:12.645689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.649739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.649789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.958 [2024-11-20 16:40:12.649804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.653753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.653813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.958 [2024-11-20 16:40:12.653829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.657721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.657785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.958 [2024-11-20 16:40:12.657800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.661565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.661631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.958 [2024-11-20 16:40:12.661646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.665263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.665326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.958 [2024-11-20 16:40:12.665341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.668992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.669084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.958 [2024-11-20 16:40:12.669099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.677913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.678134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:26.958 [2024-11-20 16:40:12.678151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.686106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.686307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.958 [2024-11-20 16:40:12.686323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.958 [2024-11-20 16:40:12.692137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.958 [2024-11-20 16:40:12.692539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.692555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.700857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.701054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.701071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.705392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.705627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.705643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.713881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.714068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.714085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.718862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.719067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.719084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.725764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.726065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.726080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.730748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.730930] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.730947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.737321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.737541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.737557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.742172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.742354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.742370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.746100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 16:40:12.746272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.959 [2024-11-20 16:40:12.746288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:26.959 [2024-11-20 16:40:12.749881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:26.959 [2024-11-20 
16:40:12.750160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.959 [2024-11-20 16:40:12.750178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:26.959 [2024-11-20 16:40:12.754205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8
00:28:26.959 [2024-11-20 16:40:12.754493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.959 [2024-11-20 16:40:12.754509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-message cycle repeats: tcp.c:2233:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8, followed by an nvme_qpair.c:243 WRITE command print (sqid:1, cid:0 or cid:1, nsid:1, len:32, varying lba) and an nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, from 2024-11-20 16:40:12.757 through 16:40:13.298 ...]
00:28:27.487 [2024-11-20 16:40:13.306787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.306949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.306968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.316792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.317036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.317052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.326263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.326472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.326489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.335821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.336179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.336195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.345504] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.345680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.345697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.355509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.355739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.355756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.365466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.365673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.365689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.375290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.375533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.375549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:27.487 [2024-11-20 16:40:13.384664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.384936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.384952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.395232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.395448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.395464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.404411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.404679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.404695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.413543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.413754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.413770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.423596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.423823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.423839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.487 [2024-11-20 16:40:13.433458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.487 [2024-11-20 16:40:13.433776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.487 [2024-11-20 16:40:13.433791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.750 4638.00 IOPS, 579.75 MiB/s [2024-11-20T15:40:13.709Z] [2024-11-20 16:40:13.443683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.443926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 16:40:13.443941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.453494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.453740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 
16:40:13.453756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.463006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.463273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 16:40:13.463289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.472751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.472817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 16:40:13.472832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.482596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.482885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 16:40:13.482901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.492184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.492525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 16:40:13.492540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.501097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.501348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 16:40:13.501364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.510167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.510259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 16:40:13.510274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.519704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.750 [2024-11-20 16:40:13.520033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.750 [2024-11-20 16:40:13.520048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.750 [2024-11-20 16:40:13.528616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.528872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.528888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.539135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.539442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.539458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.549362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.549702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.549717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.559230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.559476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.559494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.568755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.568894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.568909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.578287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.578524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.578540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.587382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.587682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.587697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.596686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.596894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.596910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.606189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 
00:28:27.751 [2024-11-20 16:40:13.606480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.606496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.616135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.616386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.616402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.625959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.626038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.626054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.635928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.636223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.636238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.644305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.644556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.644572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.650926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.650987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.651003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.656826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.656893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.656907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.660677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.660737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.660752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.664313] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.664377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.664392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.667935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.668004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.668019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.672599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.672673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.672688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.679251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.679568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.679584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:27.751 [2024-11-20 16:40:13.684752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.684863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.684877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.691408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.691494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.691509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.694976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.695064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.695079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:27.751 [2024-11-20 16:40:13.699783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:27.751 [2024-11-20 16:40:13.699974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.751 [2024-11-20 16:40:13.699994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.708143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.708226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.014 [2024-11-20 16:40:13.708241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.711700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.711769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.014 [2024-11-20 16:40:13.711784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.715032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.715104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.014 [2024-11-20 16:40:13.715119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.718317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.718387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.014 [2024-11-20 16:40:13.718402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.721600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.721677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.014 [2024-11-20 16:40:13.721692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.724893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.724966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.014 [2024-11-20 16:40:13.724989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.728341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.728415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.014 [2024-11-20 16:40:13.728430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.731631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.731705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.014 [2024-11-20 16:40:13.731720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.735189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.735268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.014 [2024-11-20 16:40:13.735283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.014 [2024-11-20 16:40:13.741081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.014 [2024-11-20 16:40:13.741179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.741193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.745846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.746075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.746091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.753571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.753677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.753692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.758512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.758751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.758766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.763150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.763206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.763220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.767548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.767806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.767821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.773609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.773853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.773869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.777857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.777911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.777925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.781923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.781979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.781998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.786780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.786840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.786854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.792236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 
00:28:28.015 [2024-11-20 16:40:13.792493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.792509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.799577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.799719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.799734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.805709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.805763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.805778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.809942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.810012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.810027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.813406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.813461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.813476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.816714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.816779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.816794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.820014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.820064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.820079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.823299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.823365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.823380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.826562] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.826624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.826638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.830044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.830101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.830116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.833634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.833687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.833702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.837718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.837769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.837784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:28.015 [2024-11-20 16:40:13.845815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.846119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.846137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.854383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.854609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.854625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.859157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.859210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.859225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.862863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.862922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.862937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.867728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.867935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.867951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.872674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.872768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.872784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.877076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.877253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.877268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.883532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.883831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.883847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.891053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.891274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.891289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.899060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.899394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.015 [2024-11-20 16:40:13.899409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.015 [2024-11-20 16:40:13.903428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.015 [2024-11-20 16:40:13.903564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.903579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.907744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.907810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.016 [2024-11-20 16:40:13.907825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.911423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.911493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.911508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.915207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.915270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.915284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.918744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.918794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.918809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.922077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.922147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.922163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.925614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.925680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.925695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.928827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.928881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.928896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.932051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.932112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.932127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.935269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.935332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.935347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.938492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.938550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.938565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.943363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.943441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.943456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.948898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.948967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.948987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.955367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 
00:28:28.016 [2024-11-20 16:40:13.955590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.955605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.962895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.962966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.962985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.016 [2024-11-20 16:40:13.966251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.016 [2024-11-20 16:40:13.966355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.016 [2024-11-20 16:40:13.966370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:13.972549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:13.972616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:13.972634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:13.975905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:13.975959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:13.975974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:13.979509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:13.979576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:13.979591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:13.985134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:13.985274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:13.985288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:13.991676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:13.991789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:13.991804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:13.995447] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:13.995712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:13.995727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.004585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.004666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.004681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.008516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.008568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.008583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.011804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.011867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.011881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:28.278 [2024-11-20 16:40:14.015075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.015131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.015146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.018341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.018392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.018407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.021598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.021655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.021670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.025090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.025167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.025182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.029491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.029609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.029624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.033084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.033171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.033187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.036305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.036391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.036406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.039544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.039627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.039642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.042766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.042853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.042868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.046018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.046105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.046119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.049268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.049347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.049362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.052485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.052569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.278 [2024-11-20 16:40:14.052584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.055718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.055791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.278 [2024-11-20 16:40:14.055806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.278 [2024-11-20 16:40:14.058925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.278 [2024-11-20 16:40:14.059017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.059032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.062147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.062232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.062247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.065359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.065452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.065466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.068586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.068683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.068697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.071955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.072076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.072094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.077908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.078157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.078173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.088243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.088423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.088438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.099347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.099532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.099548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.110659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.110921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.110937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.120932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.121218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.121234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.131804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 
00:28:28.279 [2024-11-20 16:40:14.132018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.132034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.142293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.142459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.142474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.152433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.152591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.152606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.162887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.163098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.163114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.173093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.173360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.173376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.180489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.180772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.180788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.188386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.188679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.188694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.197037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.197318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.197333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.205492] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.205735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.205750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.215604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.215874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.215890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.279 [2024-11-20 16:40:14.225778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.279 [2024-11-20 16:40:14.226017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.279 [2024-11-20 16:40:14.226033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.540 [2024-11-20 16:40:14.234806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.540 [2024-11-20 16:40:14.234881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.540 [2024-11-20 16:40:14.234896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:28.541 [2024-11-20 16:40:14.243816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.244054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.244069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.251757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.251861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.251876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.256259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.256349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.256364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.263127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.263416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.263432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.269137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.269196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.269211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.274938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.275178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.275193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.278677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.278785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.278800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.286929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.287196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.287212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.296634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.296950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.296968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.306904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.307135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.307151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.317086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.317437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.317452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.327643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.327916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:28.541 [2024-11-20 16:40:14.327933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.337835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.338125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.338141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.348473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.348743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.348759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.358721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.358939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.358955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.369062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.369335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.369350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.379615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.379895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.379911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.389723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.389921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.389937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.399355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.399594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.541 [2024-11-20 16:40:14.399610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.541 [2024-11-20 16:40:14.409320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.541 [2024-11-20 16:40:14.409589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.542 [2024-11-20 16:40:14.409605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:28.542 [2024-11-20 16:40:14.419702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.542 [2024-11-20 16:40:14.419971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.542 [2024-11-20 16:40:14.419993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:28.542 [2024-11-20 16:40:14.430623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.542 [2024-11-20 16:40:14.430882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.542 [2024-11-20 16:40:14.430897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:28.542 [2024-11-20 16:40:14.441143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2084a90) with pdu=0x200016eff3c8 00:28:28.542 [2024-11-20 16:40:14.441375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.542 [2024-11-20 16:40:14.441390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:28.542 4743.50 IOPS, 592.94 MiB/s 00:28:28.542 Latency(us) 00:28:28.542 [2024-11-20T15:40:14.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:28:28.542 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:28.542 nvme0n1 : 2.01 4737.76 592.22 0.00 0.00 3370.85 1549.65 11359.57 00:28:28.542 [2024-11-20T15:40:14.501Z] =================================================================================================================== 00:28:28.542 [2024-11-20T15:40:14.501Z] Total : 4737.76 592.22 0.00 0.00 3370.85 1549.65 11359.57 00:28:28.542 { 00:28:28.542 "results": [ 00:28:28.542 { 00:28:28.542 "job": "nvme0n1", 00:28:28.542 "core_mask": "0x2", 00:28:28.542 "workload": "randwrite", 00:28:28.542 "status": "finished", 00:28:28.542 "queue_depth": 16, 00:28:28.542 "io_size": 131072, 00:28:28.542 "runtime": 2.006646, 00:28:28.542 "iops": 4737.7564353652815, 00:28:28.542 "mibps": 592.2195544206602, 00:28:28.542 "io_failed": 0, 00:28:28.542 "io_timeout": 0, 00:28:28.542 "avg_latency_us": 3370.8503124013882, 00:28:28.542 "min_latency_us": 1549.6533333333334, 00:28:28.542 "max_latency_us": 11359.573333333334 00:28:28.542 } 00:28:28.542 ], 00:28:28.542 "core_count": 1 00:28:28.542 } 00:28:28.542 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:28.542 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:28.542 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:28.542 | .driver_specific 00:28:28.542 | .nvme_error 00:28:28.542 | .status_code 00:28:28.542 | .command_transient_transport_error' 00:28:28.542 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 307 > 0 )) 00:28:28.804 16:40:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2382430 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2382430 ']' 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2382430 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382430 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2382430' 00:28:28.804 killing process with pid 2382430 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2382430 00:28:28.804 Received shutdown signal, test time was about 2.000000 seconds 00:28:28.804 00:28:28.804 Latency(us) 00:28:28.804 [2024-11-20T15:40:14.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.804 [2024-11-20T15:40:14.763Z] =================================================================================================================== 00:28:28.804 [2024-11-20T15:40:14.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:28.804 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2382430 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 
-- # killprocess 2379926 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2379926 ']' 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2379926 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2379926 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2379926' 00:28:29.065 killing process with pid 2379926 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2379926 00:28:29.065 16:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2379926 00:28:29.065 00:28:29.065 real 0m16.598s 00:28:29.065 user 0m32.870s 00:28:29.065 sys 0m3.509s 00:28:29.065 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:29.065 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.065 ************************************ 00:28:29.065 END TEST nvmf_digest_error 00:28:29.065 ************************************ 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:29.326 16:40:15 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:29.326 rmmod nvme_tcp 00:28:29.326 rmmod nvme_fabrics 00:28:29.326 rmmod nvme_keyring 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2379926 ']' 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2379926 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2379926 ']' 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2379926 00:28:29.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2379926) - No such process 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2379926 is not found' 00:28:29.326 Process with pid 2379926 is not found 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:29.326 16:40:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:29.326 16:40:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.240 16:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:31.240 00:28:31.240 real 0m42.780s 00:28:31.240 user 1m7.935s 00:28:31.240 sys 0m12.394s 00:28:31.240 16:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:31.240 16:40:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:31.240 ************************************ 00:28:31.240 END TEST nvmf_digest 00:28:31.240 ************************************ 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh 
--transport=tcp 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.501 ************************************ 00:28:31.501 START TEST nvmf_bdevperf 00:28:31.501 ************************************ 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:31.501 * Looking for test storage... 00:28:31.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:31.501 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 
00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 
00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:31.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.763 --rc genhtml_branch_coverage=1 00:28:31.763 --rc genhtml_function_coverage=1 00:28:31.763 --rc genhtml_legend=1 00:28:31.763 --rc geninfo_all_blocks=1 00:28:31.763 --rc geninfo_unexecuted_blocks=1 00:28:31.763 00:28:31.763 ' 00:28:31.763 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:31.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.764 --rc genhtml_branch_coverage=1 00:28:31.764 --rc genhtml_function_coverage=1 00:28:31.764 --rc genhtml_legend=1 00:28:31.764 --rc geninfo_all_blocks=1 00:28:31.764 --rc geninfo_unexecuted_blocks=1 00:28:31.764 00:28:31.764 ' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:31.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.764 --rc genhtml_branch_coverage=1 00:28:31.764 --rc genhtml_function_coverage=1 00:28:31.764 --rc genhtml_legend=1 00:28:31.764 --rc geninfo_all_blocks=1 00:28:31.764 --rc geninfo_unexecuted_blocks=1 00:28:31.764 00:28:31.764 ' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:31.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.764 --rc genhtml_branch_coverage=1 00:28:31.764 --rc genhtml_function_coverage=1 00:28:31.764 --rc genhtml_legend=1 00:28:31.764 --rc geninfo_all_blocks=1 00:28:31.764 --rc geninfo_unexecuted_blocks=1 00:28:31.764 00:28:31.764 ' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:31.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:28:31.764 16:40:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:39.912 Found 
0000:31:00.0 (0x8086 - 0x159b) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:39.912 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:39.912 Found net devices under 0000:31:00.0: cvl_0_0 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:39.912 Found net devices under 0000:31:00.1: cvl_0_1 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:28:39.912 00:28:39.912 --- 10.0.0.2 ping statistics --- 00:28:39.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.912 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:28:39.912 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:28:39.912 00:28:39.912 --- 10.0.0.1 ping statistics --- 00:28:39.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.912 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2387353 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2387353 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2387353 ']' 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.913 16:40:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 [2024-11-20 16:40:24.925381] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:39.913 [2024-11-20 16:40:24.925431] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.913 [2024-11-20 16:40:25.020755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:39.913 [2024-11-20 16:40:25.061476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.913 [2024-11-20 16:40:25.061514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:39.913 [2024-11-20 16:40:25.061522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.913 [2024-11-20 16:40:25.061529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.913 [2024-11-20 16:40:25.061536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.913 [2024-11-20 16:40:25.063127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:39.913 [2024-11-20 16:40:25.063508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:39.913 [2024-11-20 16:40:25.063510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 [2024-11-20 16:40:25.772337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.913 16:40:25 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 Malloc0 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.913 [2024-11-20 16:40:25.836053] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:39.913 { 00:28:39.913 "params": { 00:28:39.913 "name": "Nvme$subsystem", 00:28:39.913 "trtype": "$TEST_TRANSPORT", 00:28:39.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.913 "adrfam": "ipv4", 00:28:39.913 "trsvcid": "$NVMF_PORT", 00:28:39.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.913 "hdgst": ${hdgst:-false}, 00:28:39.913 "ddgst": ${ddgst:-false} 00:28:39.913 }, 00:28:39.913 "method": "bdev_nvme_attach_controller" 00:28:39.913 } 00:28:39.913 EOF 00:28:39.913 )") 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:39.913 16:40:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:39.913 "params": { 00:28:39.913 "name": "Nvme1", 00:28:39.913 "trtype": "tcp", 00:28:39.913 "traddr": "10.0.0.2", 00:28:39.913 "adrfam": "ipv4", 00:28:39.913 "trsvcid": "4420", 00:28:39.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:39.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:39.913 "hdgst": false, 00:28:39.913 "ddgst": false 00:28:39.913 }, 00:28:39.913 "method": "bdev_nvme_attach_controller" 00:28:39.913 }' 00:28:40.173 [2024-11-20 16:40:25.890979] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:40.174 [2024-11-20 16:40:25.891035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387706 ] 00:28:40.174 [2024-11-20 16:40:25.963354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.174 [2024-11-20 16:40:25.999652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.434 Running I/O for 1 seconds... 
00:28:41.376 8852.00 IOPS, 34.58 MiB/s 00:28:41.376 Latency(us) 00:28:41.376 [2024-11-20T15:40:27.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.376 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:41.376 Verification LBA range: start 0x0 length 0x4000 00:28:41.377 Nvme1n1 : 1.01 8893.50 34.74 0.00 0.00 14324.55 2730.67 14854.83 00:28:41.377 [2024-11-20T15:40:27.336Z] =================================================================================================================== 00:28:41.377 [2024-11-20T15:40:27.336Z] Total : 8893.50 34.74 0.00 0.00 14324.55 2730.67 14854.83 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2387918 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.377 { 00:28:41.377 "params": { 00:28:41.377 "name": "Nvme$subsystem", 00:28:41.377 "trtype": "$TEST_TRANSPORT", 00:28:41.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.377 "adrfam": "ipv4", 00:28:41.377 "trsvcid": "$NVMF_PORT", 00:28:41.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.377 "hdgst": ${hdgst:-false}, 00:28:41.377 "ddgst": 
${ddgst:-false} 00:28:41.377 }, 00:28:41.377 "method": "bdev_nvme_attach_controller" 00:28:41.377 } 00:28:41.377 EOF 00:28:41.377 )") 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:41.377 16:40:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.377 "params": { 00:28:41.377 "name": "Nvme1", 00:28:41.377 "trtype": "tcp", 00:28:41.377 "traddr": "10.0.0.2", 00:28:41.377 "adrfam": "ipv4", 00:28:41.377 "trsvcid": "4420", 00:28:41.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.377 "hdgst": false, 00:28:41.377 "ddgst": false 00:28:41.377 }, 00:28:41.377 "method": "bdev_nvme_attach_controller" 00:28:41.377 }' 00:28:41.377 [2024-11-20 16:40:27.326686] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:28:41.377 [2024-11-20 16:40:27.326735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387918 ] 00:28:41.644 [2024-11-20 16:40:27.398325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.644 [2024-11-20 16:40:27.433878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.906 Running I/O for 15 seconds... 
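The gen_nvmf_target_json fragment traced above builds a bdev_nvme_attach_controller config from a heredoc and hands it to bdevperf as `--json /dev/fd/63`, i.e. through process substitution rather than a temp file. A minimal sketch of that mechanism, using the concrete values printed in the log (the function body is a simplification of the real helper in nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Simplified stand-in for the gen_nvmf_target_json helper seen in the trace:
# emit one bdev_nvme_attach_controller entry on stdout. Values are the ones
# the log's printf resolved to (tcp, 10.0.0.2:4420, cnode1/host1).
gen_nvmf_target_json() {
  cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# The real script does: bdevperf --json <(gen_nvmf_target_json) ...
# and bash exposes the substitution as a /dev/fd/NN path. Here the config
# is simply read back through the same mechanism:
cat <(gen_nvmf_target_json)
```

Because `<(...)` yields a `/dev/fd/NN` path, bdevperf's `--json` option never sees a file on disk, which is why the trace shows `--json /dev/fd/62` and `/dev/fd/63` for the two runs.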
00:28:43.790 9010.00 IOPS, 35.20 MiB/s [2024-11-20T15:40:30.324Z] 9915.00 IOPS, 38.73 MiB/s [2024-11-20T15:40:30.324Z] 16:40:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2387353 00:28:44.365 16:40:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:44.365 [2024-11-20 16:40:30.294117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.365 [2024-11-20 16:40:30.294157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.365 [2024-11-20 16:40:30.294178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.365 [2024-11-20 16:40:30.294189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.366 [2024-11-20 16:40:30.294202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.366 [2024-11-20 16:40:30.294211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.366 [2024-11-20 16:40:30.294223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.366 [2024-11-20 16:40:30.294233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.366 [2024-11-20 16:40:30.294244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.366 [2024-11-20 16:40:30.294251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.366 [2024-11-20 16:40:30] nvme_qpair.c: [identical NOTICE pairs repeat for every remaining in-flight command on qid:1: READ lba:91736-92152 and WRITE lba:92456-92704 (len:8 each), all completed as ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
[2024-11-20 16:40:30.295752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.368 [2024-11-20 16:40:30.295844] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.295990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.295998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 
[2024-11-20 16:40:30.296040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.368 [2024-11-20 16:40:30.296243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.368 [2024-11-20 16:40:30.296250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 [2024-11-20 16:40:30.296262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.369 [2024-11-20 16:40:30.296269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 [2024-11-20 16:40:30.296278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.369 [2024-11-20 16:40:30.296285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 [2024-11-20 16:40:30.296295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.369 [2024-11-20 16:40:30.296302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 [2024-11-20 16:40:30.296311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.369 [2024-11-20 16:40:30.296319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 
[2024-11-20 16:40:30.296328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.369 [2024-11-20 16:40:30.296335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 [2024-11-20 16:40:30.296344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.369 [2024-11-20 16:40:30.296351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 [2024-11-20 16:40:30.296360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.369 [2024-11-20 16:40:30.296367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 [2024-11-20 16:40:30.296376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f9540 is same with the state(6) to be set 00:28:44.369 [2024-11-20 16:40:30.296385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.369 [2024-11-20 16:40:30.296391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.369 [2024-11-20 16:40:30.296397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92448 len:8 PRP1 0x0 PRP2 0x0 00:28:44.369 [2024-11-20 16:40:30.296406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.369 [2024-11-20 16:40:30.299998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.369 [2024-11-20 16:40:30.300051] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.369 [2024-11-20 16:40:30.300813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.369 [2024-11-20 16:40:30.300829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.369 [2024-11-20 16:40:30.300837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.369 [2024-11-20 16:40:30.301063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.369 [2024-11-20 16:40:30.301286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.369 [2024-11-20 16:40:30.301294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.369 [2024-11-20 16:40:30.301307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.369 [2024-11-20 16:40:30.301316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.369 [2024-11-20 16:40:30.314125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.369 [2024-11-20 16:40:30.314738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.369 [2024-11-20 16:40:30.314776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.369 [2024-11-20 16:40:30.314787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.369 [2024-11-20 16:40:30.315039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.369 [2024-11-20 16:40:30.315266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.369 [2024-11-20 16:40:30.315276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.369 [2024-11-20 16:40:30.315284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.369 [2024-11-20 16:40:30.315293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.631 [2024-11-20 16:40:30.328146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.631 [2024-11-20 16:40:30.328731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.631 [2024-11-20 16:40:30.328750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.631 [2024-11-20 16:40:30.328758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.631 [2024-11-20 16:40:30.328979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.631 [2024-11-20 16:40:30.329207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.631 [2024-11-20 16:40:30.329216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.631 [2024-11-20 16:40:30.329223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.631 [2024-11-20 16:40:30.329230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.631 [2024-11-20 16:40:30.342092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.631 [2024-11-20 16:40:30.342739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.631 [2024-11-20 16:40:30.342776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.631 [2024-11-20 16:40:30.342787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.631 [2024-11-20 16:40:30.343036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.631 [2024-11-20 16:40:30.343261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.631 [2024-11-20 16:40:30.343270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.631 [2024-11-20 16:40:30.343279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.631 [2024-11-20 16:40:30.343287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.631 [2024-11-20 16:40:30.355921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.631 [2024-11-20 16:40:30.356595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.631 [2024-11-20 16:40:30.356632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.631 [2024-11-20 16:40:30.356644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.631 [2024-11-20 16:40:30.356883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.631 [2024-11-20 16:40:30.357116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.631 [2024-11-20 16:40:30.357126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.631 [2024-11-20 16:40:30.357134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.631 [2024-11-20 16:40:30.357142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.632 [2024-11-20 16:40:30.369779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.632 [2024-11-20 16:40:30.370373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.632 [2024-11-20 16:40:30.370393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.632 [2024-11-20 16:40:30.370401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.632 [2024-11-20 16:40:30.370622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.632 [2024-11-20 16:40:30.370843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.632 [2024-11-20 16:40:30.370852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.632 [2024-11-20 16:40:30.370860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.632 [2024-11-20 16:40:30.370867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.632 [2024-11-20 16:40:30.383703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.632 [2024-11-20 16:40:30.384241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.632 [2024-11-20 16:40:30.384258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.632 [2024-11-20 16:40:30.384266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.632 [2024-11-20 16:40:30.384486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.632 [2024-11-20 16:40:30.384706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.632 [2024-11-20 16:40:30.384715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.632 [2024-11-20 16:40:30.384722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.632 [2024-11-20 16:40:30.384729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.632 [2024-11-20 16:40:30.397556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.632 [2024-11-20 16:40:30.398204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.632 [2024-11-20 16:40:30.398241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.632 [2024-11-20 16:40:30.398258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.632 [2024-11-20 16:40:30.398499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.632 [2024-11-20 16:40:30.398724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.632 [2024-11-20 16:40:30.398733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.632 [2024-11-20 16:40:30.398741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.632 [2024-11-20 16:40:30.398748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.632 [2024-11-20 16:40:30.411374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.632 [2024-11-20 16:40:30.412046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.632 [2024-11-20 16:40:30.412083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.632 [2024-11-20 16:40:30.412095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.632 [2024-11-20 16:40:30.412338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.632 [2024-11-20 16:40:30.412563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.632 [2024-11-20 16:40:30.412572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.632 [2024-11-20 16:40:30.412580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.632 [2024-11-20 16:40:30.412588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.632 [2024-11-20 16:40:30.425206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.632 [2024-11-20 16:40:30.425868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.632 [2024-11-20 16:40:30.425906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.632 [2024-11-20 16:40:30.425917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.632 [2024-11-20 16:40:30.426168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.632 [2024-11-20 16:40:30.426395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.632 [2024-11-20 16:40:30.426405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.632 [2024-11-20 16:40:30.426412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.632 [2024-11-20 16:40:30.426420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.632 [2024-11-20 16:40:30.439069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:44.632 [2024-11-20 16:40:30.439722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:44.632 [2024-11-20 16:40:30.439759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:44.632 [2024-11-20 16:40:30.439775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:44.632 [2024-11-20 16:40:30.440031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:44.632 [2024-11-20 16:40:30.440261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:44.632 [2024-11-20 16:40:30.440270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:44.632 [2024-11-20 16:40:30.440278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:44.632 [2024-11-20 16:40:30.440286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:44.632 [2024-11-20 16:40:30.452927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.632 [2024-11-20 16:40:30.453590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.632 [2024-11-20 16:40:30.453628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.632 [2024-11-20 16:40:30.453638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.632 [2024-11-20 16:40:30.453878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.632 [2024-11-20 16:40:30.454113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.632 [2024-11-20 16:40:30.454123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.632 [2024-11-20 16:40:30.454131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.632 [2024-11-20 16:40:30.454139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.632 [2024-11-20 16:40:30.466781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.632 [2024-11-20 16:40:30.467362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.632 [2024-11-20 16:40:30.467381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.632 [2024-11-20 16:40:30.467389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.632 [2024-11-20 16:40:30.467610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.632 [2024-11-20 16:40:30.467830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.632 [2024-11-20 16:40:30.467838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.632 [2024-11-20 16:40:30.467845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.632 [2024-11-20 16:40:30.467852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.632 [2024-11-20 16:40:30.480688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.632 [2024-11-20 16:40:30.481377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.632 [2024-11-20 16:40:30.481415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.632 [2024-11-20 16:40:30.481427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.632 [2024-11-20 16:40:30.481667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.632 [2024-11-20 16:40:30.481892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.632 [2024-11-20 16:40:30.481900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.632 [2024-11-20 16:40:30.481914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.632 [2024-11-20 16:40:30.481922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.632 [2024-11-20 16:40:30.494545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.632 [2024-11-20 16:40:30.495209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.632 [2024-11-20 16:40:30.495246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.632 [2024-11-20 16:40:30.495257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.632 [2024-11-20 16:40:30.495498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.633 [2024-11-20 16:40:30.495722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.633 [2024-11-20 16:40:30.495731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.633 [2024-11-20 16:40:30.495739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.633 [2024-11-20 16:40:30.495747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.633 [2024-11-20 16:40:30.508392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.633 [2024-11-20 16:40:30.509042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.633 [2024-11-20 16:40:30.509081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.633 [2024-11-20 16:40:30.509091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.633 [2024-11-20 16:40:30.509331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.633 [2024-11-20 16:40:30.509556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.633 [2024-11-20 16:40:30.509565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.633 [2024-11-20 16:40:30.509573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.633 [2024-11-20 16:40:30.509581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.633 [2024-11-20 16:40:30.522212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.633 [2024-11-20 16:40:30.522840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.633 [2024-11-20 16:40:30.522878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.633 [2024-11-20 16:40:30.522889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.633 [2024-11-20 16:40:30.523138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.633 [2024-11-20 16:40:30.523364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.633 [2024-11-20 16:40:30.523374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.633 [2024-11-20 16:40:30.523381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.633 [2024-11-20 16:40:30.523389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.633 [2024-11-20 16:40:30.536058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.633 [2024-11-20 16:40:30.536596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.633 [2024-11-20 16:40:30.536616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.633 [2024-11-20 16:40:30.536623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.633 [2024-11-20 16:40:30.536844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.633 [2024-11-20 16:40:30.537071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.633 [2024-11-20 16:40:30.537081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.633 [2024-11-20 16:40:30.537088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.633 [2024-11-20 16:40:30.537095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.633 [2024-11-20 16:40:30.549923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.633 [2024-11-20 16:40:30.550535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.633 [2024-11-20 16:40:30.550573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.633 [2024-11-20 16:40:30.550586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.633 [2024-11-20 16:40:30.550830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.633 [2024-11-20 16:40:30.551062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.633 [2024-11-20 16:40:30.551072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.633 [2024-11-20 16:40:30.551080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.633 [2024-11-20 16:40:30.551089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.633 [2024-11-20 16:40:30.563896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.633 [2024-11-20 16:40:30.564458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.633 [2024-11-20 16:40:30.564478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.633 [2024-11-20 16:40:30.564486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.633 [2024-11-20 16:40:30.564706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.633 [2024-11-20 16:40:30.564927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.633 [2024-11-20 16:40:30.564936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.633 [2024-11-20 16:40:30.564943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.633 [2024-11-20 16:40:30.564950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.633 [2024-11-20 16:40:30.577794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.633 [2024-11-20 16:40:30.578441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.633 [2024-11-20 16:40:30.578478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.633 [2024-11-20 16:40:30.578493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.633 [2024-11-20 16:40:30.578733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.633 [2024-11-20 16:40:30.578957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.633 [2024-11-20 16:40:30.578966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.633 [2024-11-20 16:40:30.578974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.633 [2024-11-20 16:40:30.578990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.896 [2024-11-20 16:40:30.591621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.896 [2024-11-20 16:40:30.592182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.896 [2024-11-20 16:40:30.592202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.896 [2024-11-20 16:40:30.592210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.896 [2024-11-20 16:40:30.592430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.896 [2024-11-20 16:40:30.592651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.896 [2024-11-20 16:40:30.592658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.896 [2024-11-20 16:40:30.592665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.592672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.605518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.606079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.606096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.606103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.606323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.606544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.606552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.606559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.606565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.619417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.619966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.619988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.619997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.620217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.620441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.620449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.620456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.620462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.633303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.633795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.633812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.633819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.634045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.634271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.634279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.634286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.634293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 9282.00 IOPS, 36.26 MiB/s [2024-11-20T15:40:30.856Z] [2024-11-20 16:40:30.647302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.647873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.647889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.647896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.648121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.648342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.648351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.648358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.648365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.661208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.661788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.661805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.661812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.662038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.662260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.662268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.662280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.662287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.675124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.675650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.675666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.675674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.675894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.676119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.676129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.676136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.676143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.688996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.689565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.689580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.689588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.689807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.690033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.690042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.690049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.690055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.702893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.703522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.703559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.703570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.703810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.704044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.704054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.704062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.704070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.716913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.717490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.717526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.717539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.717780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.897 [2024-11-20 16:40:30.718019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.897 [2024-11-20 16:40:30.718031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.897 [2024-11-20 16:40:30.718038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.897 [2024-11-20 16:40:30.718046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.897 [2024-11-20 16:40:30.730876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.897 [2024-11-20 16:40:30.731462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.897 [2024-11-20 16:40:30.731481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.897 [2024-11-20 16:40:30.731489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.897 [2024-11-20 16:40:30.731710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.731939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.731948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.731955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.731962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.898 [2024-11-20 16:40:30.744813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.898 [2024-11-20 16:40:30.745443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.898 [2024-11-20 16:40:30.745481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.898 [2024-11-20 16:40:30.745492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.898 [2024-11-20 16:40:30.745731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.745955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.745964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.745972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.745980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.898 [2024-11-20 16:40:30.758821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.898 [2024-11-20 16:40:30.759371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.898 [2024-11-20 16:40:30.759392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.898 [2024-11-20 16:40:30.759405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.898 [2024-11-20 16:40:30.759628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.759848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.759856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.759863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.759870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.898 [2024-11-20 16:40:30.772712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.898 [2024-11-20 16:40:30.773317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.898 [2024-11-20 16:40:30.773335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.898 [2024-11-20 16:40:30.773343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.898 [2024-11-20 16:40:30.773563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.773783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.773791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.773798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.773804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.898 [2024-11-20 16:40:30.786649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.898 [2024-11-20 16:40:30.787190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.898 [2024-11-20 16:40:30.787207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.898 [2024-11-20 16:40:30.787214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.898 [2024-11-20 16:40:30.787434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.787655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.787663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.787670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.787677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.898 [2024-11-20 16:40:30.800509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.898 [2024-11-20 16:40:30.801040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.898 [2024-11-20 16:40:30.801057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.898 [2024-11-20 16:40:30.801064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.898 [2024-11-20 16:40:30.801284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.801508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.801517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.801524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.801531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.898 [2024-11-20 16:40:30.814389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.898 [2024-11-20 16:40:30.814968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.898 [2024-11-20 16:40:30.814992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.898 [2024-11-20 16:40:30.815000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.898 [2024-11-20 16:40:30.815220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.815440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.815448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.815455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.815462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.898 [2024-11-20 16:40:30.828306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.898 [2024-11-20 16:40:30.828882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.898 [2024-11-20 16:40:30.828898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.898 [2024-11-20 16:40:30.828905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.898 [2024-11-20 16:40:30.829131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.829352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.829360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.829367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.829374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:44.898 [2024-11-20 16:40:30.842233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:44.898 [2024-11-20 16:40:30.842662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.898 [2024-11-20 16:40:30.842680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:44.898 [2024-11-20 16:40:30.842688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:44.898 [2024-11-20 16:40:30.842908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:44.898 [2024-11-20 16:40:30.843135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:44.898 [2024-11-20 16:40:30.843144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:44.898 [2024-11-20 16:40:30.843155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:44.898 [2024-11-20 16:40:30.843161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.162 [2024-11-20 16:40:30.856213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.162 [2024-11-20 16:40:30.856831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-11-20 16:40:30.856868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-11-20 16:40:30.856878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.162 [2024-11-20 16:40:30.857127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.162 [2024-11-20 16:40:30.857352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.162 [2024-11-20 16:40:30.857361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.162 [2024-11-20 16:40:30.857369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.162 [2024-11-20 16:40:30.857377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.162 [2024-11-20 16:40:30.870218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.162 [2024-11-20 16:40:30.870893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-11-20 16:40:30.870931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-11-20 16:40:30.870942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.162 [2024-11-20 16:40:30.871189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.162 [2024-11-20 16:40:30.871414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.162 [2024-11-20 16:40:30.871423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.162 [2024-11-20 16:40:30.871431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.162 [2024-11-20 16:40:30.871439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.162 [2024-11-20 16:40:30.884049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.162 [2024-11-20 16:40:30.884598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-11-20 16:40:30.884616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-11-20 16:40:30.884624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.162 [2024-11-20 16:40:30.884844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.162 [2024-11-20 16:40:30.885070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.162 [2024-11-20 16:40:30.885079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.162 [2024-11-20 16:40:30.885087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.162 [2024-11-20 16:40:30.885093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.162 [2024-11-20 16:40:30.897918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.162 [2024-11-20 16:40:30.898452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-11-20 16:40:30.898470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-11-20 16:40:30.898477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.162 [2024-11-20 16:40:30.898697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.162 [2024-11-20 16:40:30.898917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.162 [2024-11-20 16:40:30.898925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.162 [2024-11-20 16:40:30.898932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.162 [2024-11-20 16:40:30.898939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.162 [2024-11-20 16:40:30.911765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.162 [2024-11-20 16:40:30.912443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-11-20 16:40:30.912481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-11-20 16:40:30.912492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.162 [2024-11-20 16:40:30.912732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.162 [2024-11-20 16:40:30.912956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.162 [2024-11-20 16:40:30.912965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.162 [2024-11-20 16:40:30.912973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.162 [2024-11-20 16:40:30.912988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.162 [2024-11-20 16:40:30.925591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.162 [2024-11-20 16:40:30.926023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-11-20 16:40:30.926045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-11-20 16:40:30.926053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.162 [2024-11-20 16:40:30.926274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.162 [2024-11-20 16:40:30.926495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.162 [2024-11-20 16:40:30.926503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.162 [2024-11-20 16:40:30.926511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.162 [2024-11-20 16:40:30.926517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.162 [2024-11-20 16:40:30.939570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.162 [2024-11-20 16:40:30.940045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.162 [2024-11-20 16:40:30.940062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.162 [2024-11-20 16:40:30.940074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.162 [2024-11-20 16:40:30.940295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.163 [2024-11-20 16:40:30.940514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.163 [2024-11-20 16:40:30.940523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.163 [2024-11-20 16:40:30.940530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.163 [2024-11-20 16:40:30.940537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.163 [2024-11-20 16:40:30.953566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.163 [2024-11-20 16:40:30.954275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-11-20 16:40:30.954313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-11-20 16:40:30.954323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.163 [2024-11-20 16:40:30.954563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.163 [2024-11-20 16:40:30.954787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.163 [2024-11-20 16:40:30.954796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.163 [2024-11-20 16:40:30.954803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.163 [2024-11-20 16:40:30.954811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.163 [2024-11-20 16:40:30.967444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.163 [2024-11-20 16:40:30.968011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-11-20 16:40:30.968048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-11-20 16:40:30.968060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.163 [2024-11-20 16:40:30.968303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.163 [2024-11-20 16:40:30.968528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.163 [2024-11-20 16:40:30.968538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.163 [2024-11-20 16:40:30.968546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.163 [2024-11-20 16:40:30.968554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.163 [2024-11-20 16:40:30.981386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.163 [2024-11-20 16:40:30.982072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-11-20 16:40:30.982110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-11-20 16:40:30.982122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.163 [2024-11-20 16:40:30.982365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.163 [2024-11-20 16:40:30.982594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.163 [2024-11-20 16:40:30.982603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.163 [2024-11-20 16:40:30.982611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.163 [2024-11-20 16:40:30.982619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.163 [2024-11-20 16:40:30.995228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.163 [2024-11-20 16:40:30.995769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-11-20 16:40:30.995788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-11-20 16:40:30.995796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.163 [2024-11-20 16:40:30.996022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.163 [2024-11-20 16:40:30.996244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.163 [2024-11-20 16:40:30.996253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.163 [2024-11-20 16:40:30.996260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.163 [2024-11-20 16:40:30.996267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.163 [2024-11-20 16:40:31.009088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.163 [2024-11-20 16:40:31.009618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-11-20 16:40:31.009634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-11-20 16:40:31.009642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.163 [2024-11-20 16:40:31.009862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.163 [2024-11-20 16:40:31.010087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.163 [2024-11-20 16:40:31.010097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.163 [2024-11-20 16:40:31.010104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.163 [2024-11-20 16:40:31.010111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.163 [2024-11-20 16:40:31.022944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.163 [2024-11-20 16:40:31.023472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-11-20 16:40:31.023488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-11-20 16:40:31.023495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.163 [2024-11-20 16:40:31.023715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.163 [2024-11-20 16:40:31.023935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.163 [2024-11-20 16:40:31.023943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.163 [2024-11-20 16:40:31.023958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.163 [2024-11-20 16:40:31.023965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.163 [2024-11-20 16:40:31.036804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.163 [2024-11-20 16:40:31.037382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.163 [2024-11-20 16:40:31.037399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.163 [2024-11-20 16:40:31.037406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.163 [2024-11-20 16:40:31.037626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.163 [2024-11-20 16:40:31.037846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.163 [2024-11-20 16:40:31.037855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.163 [2024-11-20 16:40:31.037862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.163 [2024-11-20 16:40:31.037868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.163 [2024-11-20 16:40:31.050685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.163 [2024-11-20 16:40:31.051201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-11-20 16:40:31.051238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-11-20 16:40:31.051250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.164 [2024-11-20 16:40:31.051492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.164 [2024-11-20 16:40:31.051717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.164 [2024-11-20 16:40:31.051726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.164 [2024-11-20 16:40:31.051734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.164 [2024-11-20 16:40:31.051742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.164 [2024-11-20 16:40:31.064576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.164 [2024-11-20 16:40:31.065094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-11-20 16:40:31.065113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-11-20 16:40:31.065121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.164 [2024-11-20 16:40:31.065342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.164 [2024-11-20 16:40:31.065562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.164 [2024-11-20 16:40:31.065571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.164 [2024-11-20 16:40:31.065578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.164 [2024-11-20 16:40:31.065586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.164 [2024-11-20 16:40:31.078418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.164 [2024-11-20 16:40:31.079119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-11-20 16:40:31.079156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-11-20 16:40:31.079167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.164 [2024-11-20 16:40:31.079407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.164 [2024-11-20 16:40:31.079632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.164 [2024-11-20 16:40:31.079641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.164 [2024-11-20 16:40:31.079649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.164 [2024-11-20 16:40:31.079657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.164 [2024-11-20 16:40:31.092286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.164 [2024-11-20 16:40:31.092951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-11-20 16:40:31.092996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-11-20 16:40:31.093007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.164 [2024-11-20 16:40:31.093246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.164 [2024-11-20 16:40:31.093472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.164 [2024-11-20 16:40:31.093480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.164 [2024-11-20 16:40:31.093488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.164 [2024-11-20 16:40:31.093496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.164 [2024-11-20 16:40:31.106109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.164 [2024-11-20 16:40:31.106545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.164 [2024-11-20 16:40:31.106564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.164 [2024-11-20 16:40:31.106572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.164 [2024-11-20 16:40:31.106793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.164 [2024-11-20 16:40:31.107022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.164 [2024-11-20 16:40:31.107032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.164 [2024-11-20 16:40:31.107039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.164 [2024-11-20 16:40:31.107046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.426 [2024-11-20 16:40:31.120074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.426 [2024-11-20 16:40:31.120744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.426 [2024-11-20 16:40:31.120782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.426 [2024-11-20 16:40:31.120797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.426 [2024-11-20 16:40:31.121046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.426 [2024-11-20 16:40:31.121271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.426 [2024-11-20 16:40:31.121280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.121288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.121296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.133933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.427 [2024-11-20 16:40:31.134594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-11-20 16:40:31.134631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-11-20 16:40:31.134641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.427 [2024-11-20 16:40:31.134881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.427 [2024-11-20 16:40:31.135115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.427 [2024-11-20 16:40:31.135125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.135133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.135141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.147761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.427 [2024-11-20 16:40:31.148447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-11-20 16:40:31.148484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-11-20 16:40:31.148496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.427 [2024-11-20 16:40:31.148735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.427 [2024-11-20 16:40:31.148959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.427 [2024-11-20 16:40:31.148967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.148975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.148993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.161606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.427 [2024-11-20 16:40:31.162301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-11-20 16:40:31.162338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-11-20 16:40:31.162349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.427 [2024-11-20 16:40:31.162589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.427 [2024-11-20 16:40:31.162818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.427 [2024-11-20 16:40:31.162827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.162834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.162842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.175476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.427 [2024-11-20 16:40:31.176053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-11-20 16:40:31.176074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-11-20 16:40:31.176082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.427 [2024-11-20 16:40:31.176303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.427 [2024-11-20 16:40:31.176523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.427 [2024-11-20 16:40:31.176531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.176538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.176545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.189363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.427 [2024-11-20 16:40:31.190012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-11-20 16:40:31.190049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-11-20 16:40:31.190062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.427 [2024-11-20 16:40:31.190303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.427 [2024-11-20 16:40:31.190527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.427 [2024-11-20 16:40:31.190537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.190544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.190552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.203177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.427 [2024-11-20 16:40:31.203850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-11-20 16:40:31.203887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-11-20 16:40:31.203898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.427 [2024-11-20 16:40:31.204145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.427 [2024-11-20 16:40:31.204370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.427 [2024-11-20 16:40:31.204379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.204391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.204400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.217015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.427 [2024-11-20 16:40:31.217647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-11-20 16:40:31.217685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-11-20 16:40:31.217696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.427 [2024-11-20 16:40:31.217935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.427 [2024-11-20 16:40:31.218169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.427 [2024-11-20 16:40:31.218179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.218187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.218194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.231006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.427 [2024-11-20 16:40:31.231683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.427 [2024-11-20 16:40:31.231721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.427 [2024-11-20 16:40:31.231731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.427 [2024-11-20 16:40:31.231971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.427 [2024-11-20 16:40:31.232211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.427 [2024-11-20 16:40:31.232222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.427 [2024-11-20 16:40:31.232230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.427 [2024-11-20 16:40:31.232238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.427 [2024-11-20 16:40:31.244854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.245540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.245577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.245588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.245828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.428 [2024-11-20 16:40:31.246061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.428 [2024-11-20 16:40:31.246071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.428 [2024-11-20 16:40:31.246080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.428 [2024-11-20 16:40:31.246087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.428 [2024-11-20 16:40:31.258718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.259396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.259433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.259444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.259690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.428 [2024-11-20 16:40:31.259916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.428 [2024-11-20 16:40:31.259924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.428 [2024-11-20 16:40:31.259933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.428 [2024-11-20 16:40:31.259940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.428 [2024-11-20 16:40:31.272556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.273258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.273295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.273306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.273545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.428 [2024-11-20 16:40:31.273770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.428 [2024-11-20 16:40:31.273779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.428 [2024-11-20 16:40:31.273787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.428 [2024-11-20 16:40:31.273795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.428 [2024-11-20 16:40:31.286408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.287107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.287145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.287157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.287398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.428 [2024-11-20 16:40:31.287622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.428 [2024-11-20 16:40:31.287630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.428 [2024-11-20 16:40:31.287638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.428 [2024-11-20 16:40:31.287646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.428 [2024-11-20 16:40:31.300266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.300910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.300948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.300965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.301214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.428 [2024-11-20 16:40:31.301441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.428 [2024-11-20 16:40:31.301450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.428 [2024-11-20 16:40:31.301458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.428 [2024-11-20 16:40:31.301467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.428 [2024-11-20 16:40:31.314287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.314926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.314964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.314975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.315227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.428 [2024-11-20 16:40:31.315451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.428 [2024-11-20 16:40:31.315461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.428 [2024-11-20 16:40:31.315468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.428 [2024-11-20 16:40:31.315476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.428 [2024-11-20 16:40:31.328178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.328842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.328879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.328889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.329138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.428 [2024-11-20 16:40:31.329363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.428 [2024-11-20 16:40:31.329372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.428 [2024-11-20 16:40:31.329381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.428 [2024-11-20 16:40:31.329389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.428 [2024-11-20 16:40:31.342023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.342589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.342625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.342637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.342877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.428 [2024-11-20 16:40:31.343113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.428 [2024-11-20 16:40:31.343123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.428 [2024-11-20 16:40:31.343131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.428 [2024-11-20 16:40:31.343139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.428 [2024-11-20 16:40:31.355958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.428 [2024-11-20 16:40:31.356594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.428 [2024-11-20 16:40:31.356631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.428 [2024-11-20 16:40:31.356642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.428 [2024-11-20 16:40:31.356882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.429 [2024-11-20 16:40:31.357115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.429 [2024-11-20 16:40:31.357125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.429 [2024-11-20 16:40:31.357133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.429 [2024-11-20 16:40:31.357141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.429 [2024-11-20 16:40:31.369959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.429 [2024-11-20 16:40:31.370551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.429 [2024-11-20 16:40:31.370570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.429 [2024-11-20 16:40:31.370578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.429 [2024-11-20 16:40:31.370798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.429 [2024-11-20 16:40:31.371025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.429 [2024-11-20 16:40:31.371034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.429 [2024-11-20 16:40:31.371041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.429 [2024-11-20 16:40:31.371048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.691 [2024-11-20 16:40:31.383861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.691 [2024-11-20 16:40:31.384393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.691 [2024-11-20 16:40:31.384409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.691 [2024-11-20 16:40:31.384416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.691 [2024-11-20 16:40:31.384636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.691 [2024-11-20 16:40:31.384856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.691 [2024-11-20 16:40:31.384864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.691 [2024-11-20 16:40:31.384875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.691 [2024-11-20 16:40:31.384882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.691 [2024-11-20 16:40:31.397681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.691 [2024-11-20 16:40:31.398313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.691 [2024-11-20 16:40:31.398350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.691 [2024-11-20 16:40:31.398361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.691 [2024-11-20 16:40:31.398600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.691 [2024-11-20 16:40:31.398825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.691 [2024-11-20 16:40:31.398833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.691 [2024-11-20 16:40:31.398842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.691 [2024-11-20 16:40:31.398850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.691 [2024-11-20 16:40:31.411668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.691 [2024-11-20 16:40:31.412346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.691 [2024-11-20 16:40:31.412383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.691 [2024-11-20 16:40:31.412394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.412634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.412858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.412867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.412874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.412883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.425507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.426102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.426140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.426152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.426395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.426619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.426629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.426636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.426644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.439493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.440094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.440132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.440144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.440387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.440612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.440621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.440629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.440637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.453459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.454085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.454122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.454134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.454377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.454601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.454610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.454618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.454626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.467457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.468064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.468101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.468112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.468352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.468576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.468584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.468592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.468601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.481427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.482111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.482148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.482163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.482403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.482628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.482637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.482645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.482652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.495277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.495864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.495883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.495891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.496117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.496338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.496347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.496354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.496360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.509181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.509840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.509878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.509889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.510135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.510360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.510369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.510377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.510385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.523041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.523603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.523639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.523652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.523892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.524133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.524145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.524154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.524162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.537006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.537587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.537605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.537614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.692 [2024-11-20 16:40:31.537834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.692 [2024-11-20 16:40:31.538072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.692 [2024-11-20 16:40:31.538082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.692 [2024-11-20 16:40:31.538089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.692 [2024-11-20 16:40:31.538096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.692 [2024-11-20 16:40:31.550912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.692 [2024-11-20 16:40:31.551576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.692 [2024-11-20 16:40:31.551614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.692 [2024-11-20 16:40:31.551624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.693 [2024-11-20 16:40:31.551864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.693 [2024-11-20 16:40:31.552097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.693 [2024-11-20 16:40:31.552107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.693 [2024-11-20 16:40:31.552116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.693 [2024-11-20 16:40:31.552124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.693 [2024-11-20 16:40:31.564747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.693 [2024-11-20 16:40:31.565415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-11-20 16:40:31.565453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-11-20 16:40:31.565464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.693 [2024-11-20 16:40:31.565704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.693 [2024-11-20 16:40:31.565928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.693 [2024-11-20 16:40:31.565937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.693 [2024-11-20 16:40:31.565949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.693 [2024-11-20 16:40:31.565957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.693 [2024-11-20 16:40:31.578583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.693 [2024-11-20 16:40:31.579270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-11-20 16:40:31.579308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-11-20 16:40:31.579319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.693 [2024-11-20 16:40:31.579559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.693 [2024-11-20 16:40:31.579783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.693 [2024-11-20 16:40:31.579793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.693 [2024-11-20 16:40:31.579801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.693 [2024-11-20 16:40:31.579809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.693 [2024-11-20 16:40:31.592435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.693 [2024-11-20 16:40:31.593088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-11-20 16:40:31.593126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-11-20 16:40:31.593138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.693 [2024-11-20 16:40:31.593381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.693 [2024-11-20 16:40:31.593605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.693 [2024-11-20 16:40:31.593615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.693 [2024-11-20 16:40:31.593623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.693 [2024-11-20 16:40:31.593631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.693 [2024-11-20 16:40:31.606463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.693 [2024-11-20 16:40:31.607083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-11-20 16:40:31.607261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-11-20 16:40:31.607274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.693 [2024-11-20 16:40:31.607562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.693 [2024-11-20 16:40:31.607787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.693 [2024-11-20 16:40:31.607796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.693 [2024-11-20 16:40:31.607804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.693 [2024-11-20 16:40:31.607812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.693 [2024-11-20 16:40:31.620434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.693 [2024-11-20 16:40:31.621123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-11-20 16:40:31.621161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-11-20 16:40:31.621173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.693 [2024-11-20 16:40:31.621415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.693 [2024-11-20 16:40:31.621639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.693 [2024-11-20 16:40:31.621648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.693 [2024-11-20 16:40:31.621656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.693 [2024-11-20 16:40:31.621664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.693 [2024-11-20 16:40:31.634288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.693 [2024-11-20 16:40:31.634962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.693 [2024-11-20 16:40:31.635006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.693 [2024-11-20 16:40:31.635017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.693 [2024-11-20 16:40:31.635257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.693 [2024-11-20 16:40:31.635491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.693 [2024-11-20 16:40:31.635501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.693 [2024-11-20 16:40:31.635509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.693 [2024-11-20 16:40:31.635517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.956 6961.50 IOPS, 27.19 MiB/s [2024-11-20T15:40:31.915Z] [2024-11-20 16:40:31.648110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.956 [2024-11-20 16:40:31.648744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-11-20 16:40:31.648781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-11-20 16:40:31.648792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.956 [2024-11-20 16:40:31.649040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.956 [2024-11-20 16:40:31.649266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.956 [2024-11-20 16:40:31.649275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.956 [2024-11-20 16:40:31.649283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.956 [2024-11-20 16:40:31.649291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
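The performance sample interleaved into the log above (`6961.50 IOPS, 27.19 MiB/s`) is internally consistent with a 4 KiB I/O size. The block size is not printed in this line, so treating it as 4 KiB is an assumption, but the arithmetic checks out:

```python
KIB = 1024
MIB = 1024 * KIB

iops = 6961.50        # reported by the perf tool in the log line above
block_size = 4 * KIB  # assumed I/O size; not stated in this log line

# throughput = IOPS * bytes per I/O, converted to MiB/s
mib_per_s = iops * block_size / MIB
print(round(mib_per_s, 2))  # matches the reported 27.19 MiB/s
```

If the workload used a different block size, the reported MiB/s figure would scale proportionally; the match here is what suggests 4 KiB I/Os.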
00:28:45.956 [2024-11-20 16:40:31.662124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.956 [2024-11-20 16:40:31.662798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-11-20 16:40:31.662834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-11-20 16:40:31.662849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.956 [2024-11-20 16:40:31.663097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.956 [2024-11-20 16:40:31.663323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.956 [2024-11-20 16:40:31.663332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.956 [2024-11-20 16:40:31.663340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.956 [2024-11-20 16:40:31.663348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.956 [2024-11-20 16:40:31.675953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.956 [2024-11-20 16:40:31.676589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-11-20 16:40:31.676626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-11-20 16:40:31.676637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.956 [2024-11-20 16:40:31.676877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.956 [2024-11-20 16:40:31.677111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.956 [2024-11-20 16:40:31.677121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.956 [2024-11-20 16:40:31.677129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.956 [2024-11-20 16:40:31.677137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.956 [2024-11-20 16:40:31.689956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.956 [2024-11-20 16:40:31.690627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-11-20 16:40:31.690664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-11-20 16:40:31.690675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.956 [2024-11-20 16:40:31.690915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.956 [2024-11-20 16:40:31.691149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.956 [2024-11-20 16:40:31.691159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.956 [2024-11-20 16:40:31.691167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.956 [2024-11-20 16:40:31.691175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.956 [2024-11-20 16:40:31.703791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.956 [2024-11-20 16:40:31.704380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.956 [2024-11-20 16:40:31.704400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.956 [2024-11-20 16:40:31.704407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.956 [2024-11-20 16:40:31.704627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.956 [2024-11-20 16:40:31.704856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.956 [2024-11-20 16:40:31.704864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.704871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.704878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.717704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.718199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.718216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.718224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.718444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.718664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.718671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.718679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.718685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.731716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.732246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.732282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.732293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.732532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.732757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.732766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.732774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.732782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.745622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.746279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.746316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.746327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.746567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.746791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.746800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.746813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.746821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.759444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.759936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.759962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.759971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.760204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.760427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.760435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.760443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.760450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.773307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.773954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.773998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.774010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.774249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.774473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.774482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.774490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.774498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.787121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.787775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.787812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.787823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.788072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.788298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.788306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.788315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.788323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.800944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.801616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.801654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.801666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.801906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.802142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.802152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.802160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.802168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.814777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.815496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.815534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.815544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.815784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.816017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.816027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.816034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.816042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.828658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.829302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.829340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.829350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.829590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.829814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.957 [2024-11-20 16:40:31.829823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.957 [2024-11-20 16:40:31.829831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.957 [2024-11-20 16:40:31.829838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.957 [2024-11-20 16:40:31.842514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.957 [2024-11-20 16:40:31.843124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.957 [2024-11-20 16:40:31.843162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.957 [2024-11-20 16:40:31.843178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.957 [2024-11-20 16:40:31.843421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.957 [2024-11-20 16:40:31.843645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.958 [2024-11-20 16:40:31.843655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.958 [2024-11-20 16:40:31.843662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.958 [2024-11-20 16:40:31.843670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.958 [2024-11-20 16:40:31.856505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.958 [2024-11-20 16:40:31.857083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.958 [2024-11-20 16:40:31.857120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.958 [2024-11-20 16:40:31.857132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.958 [2024-11-20 16:40:31.857373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.958 [2024-11-20 16:40:31.857597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.958 [2024-11-20 16:40:31.857606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.958 [2024-11-20 16:40:31.857614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.958 [2024-11-20 16:40:31.857622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.958 [2024-11-20 16:40:31.870460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.958 [2024-11-20 16:40:31.871086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.958 [2024-11-20 16:40:31.871124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.958 [2024-11-20 16:40:31.871136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.958 [2024-11-20 16:40:31.871377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.958 [2024-11-20 16:40:31.871601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.958 [2024-11-20 16:40:31.871610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.958 [2024-11-20 16:40:31.871618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.958 [2024-11-20 16:40:31.871626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.958 [2024-11-20 16:40:31.884457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.958 [2024-11-20 16:40:31.885083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.958 [2024-11-20 16:40:31.885121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.958 [2024-11-20 16:40:31.885133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.958 [2024-11-20 16:40:31.885374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.958 [2024-11-20 16:40:31.885603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.958 [2024-11-20 16:40:31.885612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.958 [2024-11-20 16:40:31.885620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.958 [2024-11-20 16:40:31.885628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:45.958 [2024-11-20 16:40:31.898460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:45.958 [2024-11-20 16:40:31.899005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.958 [2024-11-20 16:40:31.899025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:45.958 [2024-11-20 16:40:31.899041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:45.958 [2024-11-20 16:40:31.899262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:45.958 [2024-11-20 16:40:31.899482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:45.958 [2024-11-20 16:40:31.899490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:45.958 [2024-11-20 16:40:31.899497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:45.958 [2024-11-20 16:40:31.899503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.221 [2024-11-20 16:40:31.912318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.221 [2024-11-20 16:40:31.912977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-11-20 16:40:31.913022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-11-20 16:40:31.913033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.221 [2024-11-20 16:40:31.913272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.221 [2024-11-20 16:40:31.913497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.221 [2024-11-20 16:40:31.913507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.221 [2024-11-20 16:40:31.913514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.221 [2024-11-20 16:40:31.913523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.221 [2024-11-20 16:40:31.926136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.221 [2024-11-20 16:40:31.926763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-11-20 16:40:31.926800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-11-20 16:40:31.926810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.221 [2024-11-20 16:40:31.927059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.221 [2024-11-20 16:40:31.927284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.221 [2024-11-20 16:40:31.927293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.221 [2024-11-20 16:40:31.927306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.221 [2024-11-20 16:40:31.927314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.221 [2024-11-20 16:40:31.940150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.221 [2024-11-20 16:40:31.940808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-11-20 16:40:31.940844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-11-20 16:40:31.940855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.221 [2024-11-20 16:40:31.941103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.221 [2024-11-20 16:40:31.941328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.221 [2024-11-20 16:40:31.941337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.221 [2024-11-20 16:40:31.941345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.221 [2024-11-20 16:40:31.941353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.221 [2024-11-20 16:40:31.953963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.221 [2024-11-20 16:40:31.954544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-11-20 16:40:31.954563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-11-20 16:40:31.954571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.221 [2024-11-20 16:40:31.954791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.221 [2024-11-20 16:40:31.955017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.221 [2024-11-20 16:40:31.955026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.221 [2024-11-20 16:40:31.955033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.221 [2024-11-20 16:40:31.955039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.221 [2024-11-20 16:40:31.967860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.221 [2024-11-20 16:40:31.968394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.221 [2024-11-20 16:40:31.968411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.221 [2024-11-20 16:40:31.968419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:31.968639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:31.968858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:31.968866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:31.968873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:31.968880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:31.981700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:31.982365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:31.982402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:31.982413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:31.982652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:31.982877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:31.982886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:31.982894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:31.982902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:31.995526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:31.996106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:31.996144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:31.996156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:31.996398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:31.996622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:31.996631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:31.996638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:31.996647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:32.009473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:32.010008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:32.010045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:32.010057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:32.010298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:32.010522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:32.010530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:32.010538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:32.010546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:32.023353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:32.024000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:32.024038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:32.024055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:32.024297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:32.024522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:32.024531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:32.024539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:32.024547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:32.037184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:32.037836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:32.037873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:32.037884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:32.038131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:32.038357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:32.038366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:32.038374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:32.038382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:32.051202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:32.051856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:32.051893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:32.051903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:32.052153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:32.052378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:32.052387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:32.052395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:32.052403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:32.065020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:32.065660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:32.065697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:32.065708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:32.065948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:32.066186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:32.066197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:32.066205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:32.066213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:32.079035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:32.079628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:32.079647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:32.079655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:32.079876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:32.080103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:32.080112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:32.080119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.222 [2024-11-20 16:40:32.080126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.222 [2024-11-20 16:40:32.092977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.222 [2024-11-20 16:40:32.093515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.222 [2024-11-20 16:40:32.093551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.222 [2024-11-20 16:40:32.093563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.222 [2024-11-20 16:40:32.093806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.222 [2024-11-20 16:40:32.094038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.222 [2024-11-20 16:40:32.094047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.222 [2024-11-20 16:40:32.094056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.223 [2024-11-20 16:40:32.094064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.223 [2024-11-20 16:40:32.106883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.223 [2024-11-20 16:40:32.107520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.223 [2024-11-20 16:40:32.107557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.223 [2024-11-20 16:40:32.107568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.223 [2024-11-20 16:40:32.107807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.223 [2024-11-20 16:40:32.108039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.223 [2024-11-20 16:40:32.108049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.223 [2024-11-20 16:40:32.108061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.223 [2024-11-20 16:40:32.108069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.223 [2024-11-20 16:40:32.120912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.223 [2024-11-20 16:40:32.121591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.223 [2024-11-20 16:40:32.121629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.223 [2024-11-20 16:40:32.121639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.223 [2024-11-20 16:40:32.121879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.223 [2024-11-20 16:40:32.122111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.223 [2024-11-20 16:40:32.122121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.223 [2024-11-20 16:40:32.122129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.223 [2024-11-20 16:40:32.122137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.223 [2024-11-20 16:40:32.134765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.223 [2024-11-20 16:40:32.135331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.223 [2024-11-20 16:40:32.135349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.223 [2024-11-20 16:40:32.135357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.223 [2024-11-20 16:40:32.135578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.223 [2024-11-20 16:40:32.135798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.223 [2024-11-20 16:40:32.135806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.223 [2024-11-20 16:40:32.135813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.223 [2024-11-20 16:40:32.135820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.223 [2024-11-20 16:40:32.148657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.223 [2024-11-20 16:40:32.149193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.223 [2024-11-20 16:40:32.149211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.223 [2024-11-20 16:40:32.149219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.223 [2024-11-20 16:40:32.149439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.223 [2024-11-20 16:40:32.149659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.223 [2024-11-20 16:40:32.149668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.223 [2024-11-20 16:40:32.149675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.223 [2024-11-20 16:40:32.149682] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.223 [2024-11-20 16:40:32.162516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.223 [2024-11-20 16:40:32.163199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.223 [2024-11-20 16:40:32.163236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.223 [2024-11-20 16:40:32.163247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.223 [2024-11-20 16:40:32.163487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.223 [2024-11-20 16:40:32.163712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.223 [2024-11-20 16:40:32.163721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.223 [2024-11-20 16:40:32.163728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.223 [2024-11-20 16:40:32.163736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.223 [2024-11-20 16:40:32.176359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.486 [2024-11-20 16:40:32.177043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.486 [2024-11-20 16:40:32.177087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.486 [2024-11-20 16:40:32.177102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.486 [2024-11-20 16:40:32.177419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.486 [2024-11-20 16:40:32.177717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.486 [2024-11-20 16:40:32.177733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.486 [2024-11-20 16:40:32.177745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.486 [2024-11-20 16:40:32.177757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.486 [2024-11-20 16:40:32.190224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.486 [2024-11-20 16:40:32.190801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.486 [2024-11-20 16:40:32.190838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.486 [2024-11-20 16:40:32.190850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.486 [2024-11-20 16:40:32.191102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.486 [2024-11-20 16:40:32.191327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.486 [2024-11-20 16:40:32.191336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.486 [2024-11-20 16:40:32.191344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.486 [2024-11-20 16:40:32.191353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.486 [2024-11-20 16:40:32.204185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.486 [2024-11-20 16:40:32.204780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.486 [2024-11-20 16:40:32.204800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.486 [2024-11-20 16:40:32.204813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.486 [2024-11-20 16:40:32.205040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.486 [2024-11-20 16:40:32.205262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.486 [2024-11-20 16:40:32.205270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.486 [2024-11-20 16:40:32.205277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.486 [2024-11-20 16:40:32.205285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.486 [2024-11-20 16:40:32.218105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.486 [2024-11-20 16:40:32.218542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.486 [2024-11-20 16:40:32.218559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.486 [2024-11-20 16:40:32.218567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.486 [2024-11-20 16:40:32.218788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.486 [2024-11-20 16:40:32.219015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.486 [2024-11-20 16:40:32.219023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.486 [2024-11-20 16:40:32.219031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.486 [2024-11-20 16:40:32.219037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.486 [2024-11-20 16:40:32.232076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.486 [2024-11-20 16:40:32.232623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.232661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.232673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.232916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.233156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.233167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.233175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.233183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.246022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.246571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.246607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.246618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.246857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.247094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.247104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.247113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.247120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.259955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.260519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.260539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.260547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.260769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.260995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.261003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.261011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.261018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.273834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.274533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.274570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.274582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.274824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.275055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.275065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.275073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.275081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.287699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.288308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.288328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.288336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.288557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.288778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.288786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.288798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.288805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.301631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.302094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.302131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.302144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.302387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.302612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.302623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.302631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.302639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.315479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.316211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.316248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.316259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.316499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.316723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.316732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.316740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.316748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.329370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.329913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.329931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.329939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.330165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.330386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.330394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.330402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.330409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.343264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.343876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.343913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.343924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.344172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.344397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.344406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.344413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.344422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.357333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.487 [2024-11-20 16:40:32.358014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.487 [2024-11-20 16:40:32.358052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.487 [2024-11-20 16:40:32.358064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.487 [2024-11-20 16:40:32.358307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.487 [2024-11-20 16:40:32.358532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.487 [2024-11-20 16:40:32.358541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.487 [2024-11-20 16:40:32.358548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.487 [2024-11-20 16:40:32.358556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.487 [2024-11-20 16:40:32.371192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.488 [2024-11-20 16:40:32.371789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.488 [2024-11-20 16:40:32.371808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.488 [2024-11-20 16:40:32.371816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.488 [2024-11-20 16:40:32.372044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.488 [2024-11-20 16:40:32.372265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.488 [2024-11-20 16:40:32.372273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.488 [2024-11-20 16:40:32.372280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.488 [2024-11-20 16:40:32.372287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.488 [2024-11-20 16:40:32.385123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.488 [2024-11-20 16:40:32.385665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.488 [2024-11-20 16:40:32.385681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.488 [2024-11-20 16:40:32.385696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.488 [2024-11-20 16:40:32.385916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.488 [2024-11-20 16:40:32.386143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.488 [2024-11-20 16:40:32.386152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.488 [2024-11-20 16:40:32.386159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.488 [2024-11-20 16:40:32.386166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.488 [2024-11-20 16:40:32.398997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.488 [2024-11-20 16:40:32.399531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.488 [2024-11-20 16:40:32.399547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.488 [2024-11-20 16:40:32.399554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.488 [2024-11-20 16:40:32.399774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.488 [2024-11-20 16:40:32.400001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.488 [2024-11-20 16:40:32.400009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.488 [2024-11-20 16:40:32.400016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.488 [2024-11-20 16:40:32.400022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.488 [2024-11-20 16:40:32.412850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.488 [2024-11-20 16:40:32.413400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.488 [2024-11-20 16:40:32.413416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.488 [2024-11-20 16:40:32.413424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.488 [2024-11-20 16:40:32.413644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.488 [2024-11-20 16:40:32.413864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.488 [2024-11-20 16:40:32.413872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.488 [2024-11-20 16:40:32.413879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.488 [2024-11-20 16:40:32.413886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.488 [2024-11-20 16:40:32.426725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.488 [2024-11-20 16:40:32.427258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.488 [2024-11-20 16:40:32.427274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.488 [2024-11-20 16:40:32.427282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.488 [2024-11-20 16:40:32.427502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.488 [2024-11-20 16:40:32.427726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.488 [2024-11-20 16:40:32.427734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.488 [2024-11-20 16:40:32.427741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.488 [2024-11-20 16:40:32.427747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.488 [2024-11-20 16:40:32.440600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.488 [2024-11-20 16:40:32.441129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.488 [2024-11-20 16:40:32.441146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.488 [2024-11-20 16:40:32.441153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.488 [2024-11-20 16:40:32.441373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.751 [2024-11-20 16:40:32.441594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.751 [2024-11-20 16:40:32.441604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.751 [2024-11-20 16:40:32.441611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.751 [2024-11-20 16:40:32.441618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.751 [2024-11-20 16:40:32.454460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.751 [2024-11-20 16:40:32.455020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.751 [2024-11-20 16:40:32.455058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.751 [2024-11-20 16:40:32.455069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.751 [2024-11-20 16:40:32.455308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.751 [2024-11-20 16:40:32.455532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.751 [2024-11-20 16:40:32.455542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.751 [2024-11-20 16:40:32.455551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.751 [2024-11-20 16:40:32.455560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.751 [2024-11-20 16:40:32.468400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.751 [2024-11-20 16:40:32.469099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.751 [2024-11-20 16:40:32.469136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.751 [2024-11-20 16:40:32.469149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.751 [2024-11-20 16:40:32.469392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.751 [2024-11-20 16:40:32.469616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.469625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.469638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.469647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.482267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.482818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.482855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.482866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.483116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.483341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.483350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.483358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.483366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.496205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.496790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.496809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.496816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.497044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.497265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.497273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.497280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.497287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.510125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.510545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.510564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.510572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.510792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.511018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.511027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.511034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.511040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.524097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.524636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.524652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.524660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.524879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.525105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.525115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.525122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.525129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.537967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.538540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.538557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.538565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.538784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.539160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.539172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.539179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.539186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.551813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.552355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.552372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.552380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.552600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.552820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.552828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.552835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.552841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.565679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.566318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.566356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.566371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.566611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.566835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.566844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.566852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.566860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.579515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.580077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.580097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.580105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.580326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.580546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.580555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.580562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.580568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.593401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.593965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.593988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.593996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.594216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.752 [2024-11-20 16:40:32.594436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.752 [2024-11-20 16:40:32.594444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.752 [2024-11-20 16:40:32.594451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.752 [2024-11-20 16:40:32.594458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.752 [2024-11-20 16:40:32.607475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.752 [2024-11-20 16:40:32.608048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.752 [2024-11-20 16:40:32.608065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.752 [2024-11-20 16:40:32.608073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.752 [2024-11-20 16:40:32.608293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.753 [2024-11-20 16:40:32.608517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.753 [2024-11-20 16:40:32.608525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.753 [2024-11-20 16:40:32.608532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.753 [2024-11-20 16:40:32.608539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.753 [2024-11-20 16:40:32.621372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.753 [2024-11-20 16:40:32.622010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.753 [2024-11-20 16:40:32.622048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.753 [2024-11-20 16:40:32.622059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.753 [2024-11-20 16:40:32.622299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.753 [2024-11-20 16:40:32.622524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.753 [2024-11-20 16:40:32.622533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.753 [2024-11-20 16:40:32.622540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.753 [2024-11-20 16:40:32.622548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.753 [2024-11-20 16:40:32.635383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.753 [2024-11-20 16:40:32.635971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.753 [2024-11-20 16:40:32.635997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.753 [2024-11-20 16:40:32.636006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.753 [2024-11-20 16:40:32.636226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.753 [2024-11-20 16:40:32.636447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.753 [2024-11-20 16:40:32.636455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.753 [2024-11-20 16:40:32.636462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.753 [2024-11-20 16:40:32.636469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.753 5569.20 IOPS, 21.75 MiB/s [2024-11-20T15:40:32.712Z] [2024-11-20 16:40:32.649276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.753 [2024-11-20 16:40:32.649895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.753 [2024-11-20 16:40:32.649933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.753 [2024-11-20 16:40:32.649943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.753 [2024-11-20 16:40:32.650190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.753 [2024-11-20 16:40:32.650416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.753 [2024-11-20 16:40:32.650425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.753 [2024-11-20 16:40:32.650438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.753 [2024-11-20 16:40:32.650446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.753 [2024-11-20 16:40:32.663290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.753 [2024-11-20 16:40:32.663964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.753 [2024-11-20 16:40:32.664009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.753 [2024-11-20 16:40:32.664021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.753 [2024-11-20 16:40:32.664261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.753 [2024-11-20 16:40:32.664486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.753 [2024-11-20 16:40:32.664495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.753 [2024-11-20 16:40:32.664503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.753 [2024-11-20 16:40:32.664512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.753 [2024-11-20 16:40:32.677142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.753 [2024-11-20 16:40:32.677752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.753 [2024-11-20 16:40:32.677791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.753 [2024-11-20 16:40:32.677802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.753 [2024-11-20 16:40:32.678050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.753 [2024-11-20 16:40:32.678276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.753 [2024-11-20 16:40:32.678286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.753 [2024-11-20 16:40:32.678294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.753 [2024-11-20 16:40:32.678302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.753 [2024-11-20 16:40:32.691128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.753 [2024-11-20 16:40:32.691704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.753 [2024-11-20 16:40:32.691724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.753 [2024-11-20 16:40:32.691732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.753 [2024-11-20 16:40:32.691952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.753 [2024-11-20 16:40:32.692179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.753 [2024-11-20 16:40:32.692188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.753 [2024-11-20 16:40:32.692195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.753 [2024-11-20 16:40:32.692202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.753 [2024-11-20 16:40:32.705031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.753 [2024-11-20 16:40:32.705648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.753 [2024-11-20 16:40:32.705687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:46.753 [2024-11-20 16:40:32.705698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:46.753 [2024-11-20 16:40:32.705938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:46.753 [2024-11-20 16:40:32.706171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.753 [2024-11-20 16:40:32.706182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.753 [2024-11-20 16:40:32.706190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.753 [2024-11-20 16:40:32.706198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.016 [2024-11-20 16:40:32.719022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.016 [2024-11-20 16:40:32.719609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.016 [2024-11-20 16:40:32.719627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.016 [2024-11-20 16:40:32.719635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.016 [2024-11-20 16:40:32.719856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.016 [2024-11-20 16:40:32.720082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.016 [2024-11-20 16:40:32.720091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.016 [2024-11-20 16:40:32.720098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.016 [2024-11-20 16:40:32.720105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.016 [2024-11-20 16:40:32.732919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.733581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.733619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.733630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.733870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.017 [2024-11-20 16:40:32.734111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.017 [2024-11-20 16:40:32.734121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.017 [2024-11-20 16:40:32.734129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.017 [2024-11-20 16:40:32.734137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.017 [2024-11-20 16:40:32.746769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.747318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.747337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.747350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.747571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.017 [2024-11-20 16:40:32.747791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.017 [2024-11-20 16:40:32.747799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.017 [2024-11-20 16:40:32.747806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.017 [2024-11-20 16:40:32.747813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.017 [2024-11-20 16:40:32.760645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.761301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.761339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.761350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.761590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.017 [2024-11-20 16:40:32.761813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.017 [2024-11-20 16:40:32.761822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.017 [2024-11-20 16:40:32.761830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.017 [2024-11-20 16:40:32.761838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.017 [2024-11-20 16:40:32.774479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.774938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.774957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.774965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.775191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.017 [2024-11-20 16:40:32.775412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.017 [2024-11-20 16:40:32.775420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.017 [2024-11-20 16:40:32.775427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.017 [2024-11-20 16:40:32.775434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.017 [2024-11-20 16:40:32.788467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.789082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.789120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.789132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.789372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.017 [2024-11-20 16:40:32.789601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.017 [2024-11-20 16:40:32.789610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.017 [2024-11-20 16:40:32.789618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.017 [2024-11-20 16:40:32.789625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.017 [2024-11-20 16:40:32.802280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.802973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.803018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.803030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.803272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.017 [2024-11-20 16:40:32.803497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.017 [2024-11-20 16:40:32.803505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.017 [2024-11-20 16:40:32.803513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.017 [2024-11-20 16:40:32.803521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.017 [2024-11-20 16:40:32.816128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.816808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.816845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.816856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.817105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.017 [2024-11-20 16:40:32.817331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.017 [2024-11-20 16:40:32.817340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.017 [2024-11-20 16:40:32.817348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.017 [2024-11-20 16:40:32.817356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.017 [2024-11-20 16:40:32.829965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.830567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.830586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.830594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.830814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.017 [2024-11-20 16:40:32.831041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.017 [2024-11-20 16:40:32.831050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.017 [2024-11-20 16:40:32.831061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.017 [2024-11-20 16:40:32.831068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.017 [2024-11-20 16:40:32.843905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.017 [2024-11-20 16:40:32.844456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.017 [2024-11-20 16:40:32.844494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.017 [2024-11-20 16:40:32.844505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.017 [2024-11-20 16:40:32.844744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.018 [2024-11-20 16:40:32.844968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.018 [2024-11-20 16:40:32.844977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.018 [2024-11-20 16:40:32.844995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.018 [2024-11-20 16:40:32.845003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.018 [2024-11-20 16:40:32.857816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.018 [2024-11-20 16:40:32.858299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.018 [2024-11-20 16:40:32.858319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.018 [2024-11-20 16:40:32.858327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.018 [2024-11-20 16:40:32.858547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.018 [2024-11-20 16:40:32.858767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.018 [2024-11-20 16:40:32.858776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.018 [2024-11-20 16:40:32.858783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.018 [2024-11-20 16:40:32.858790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.018 [2024-11-20 16:40:32.871827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.018 [2024-11-20 16:40:32.872464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.018 [2024-11-20 16:40:32.872502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.018 [2024-11-20 16:40:32.872513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.018 [2024-11-20 16:40:32.872753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.018 [2024-11-20 16:40:32.872977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.018 [2024-11-20 16:40:32.872995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.018 [2024-11-20 16:40:32.873004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.018 [2024-11-20 16:40:32.873011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.018 [2024-11-20 16:40:32.885840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.018 [2024-11-20 16:40:32.886273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.018 [2024-11-20 16:40:32.886293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.018 [2024-11-20 16:40:32.886302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.018 [2024-11-20 16:40:32.886523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.018 [2024-11-20 16:40:32.886743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.018 [2024-11-20 16:40:32.886752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.018 [2024-11-20 16:40:32.886759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.018 [2024-11-20 16:40:32.886766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.018 [2024-11-20 16:40:32.899783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.018 [2024-11-20 16:40:32.900474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.018 [2024-11-20 16:40:32.900511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.018 [2024-11-20 16:40:32.900522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.018 [2024-11-20 16:40:32.900762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.018 [2024-11-20 16:40:32.900994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.018 [2024-11-20 16:40:32.901004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.018 [2024-11-20 16:40:32.901012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.018 [2024-11-20 16:40:32.901019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.018 [2024-11-20 16:40:32.913632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.018 [2024-11-20 16:40:32.914306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.018 [2024-11-20 16:40:32.914343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.018 [2024-11-20 16:40:32.914353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.018 [2024-11-20 16:40:32.914593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.018 [2024-11-20 16:40:32.914817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.018 [2024-11-20 16:40:32.914826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.018 [2024-11-20 16:40:32.914834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.018 [2024-11-20 16:40:32.914842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.018 [2024-11-20 16:40:32.927463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.018 [2024-11-20 16:40:32.928189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.018 [2024-11-20 16:40:32.928227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.018 [2024-11-20 16:40:32.928242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.018 [2024-11-20 16:40:32.928482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.018 [2024-11-20 16:40:32.928706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.018 [2024-11-20 16:40:32.928715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.018 [2024-11-20 16:40:32.928723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.018 [2024-11-20 16:40:32.928731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.018 [2024-11-20 16:40:32.941379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.018 [2024-11-20 16:40:32.942084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.018 [2024-11-20 16:40:32.942122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.018 [2024-11-20 16:40:32.942132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.018 [2024-11-20 16:40:32.942372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.018 [2024-11-20 16:40:32.942596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.018 [2024-11-20 16:40:32.942604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.018 [2024-11-20 16:40:32.942612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.018 [2024-11-20 16:40:32.942620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.018 [2024-11-20 16:40:32.955244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.018 [2024-11-20 16:40:32.955901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.018 [2024-11-20 16:40:32.955938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.018 [2024-11-20 16:40:32.955951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.019 [2024-11-20 16:40:32.956200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.019 [2024-11-20 16:40:32.956426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.019 [2024-11-20 16:40:32.956435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.019 [2024-11-20 16:40:32.956443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.019 [2024-11-20 16:40:32.956451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.019 [2024-11-20 16:40:32.969081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.019 [2024-11-20 16:40:32.969754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.019 [2024-11-20 16:40:32.969790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.019 [2024-11-20 16:40:32.969801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.019 [2024-11-20 16:40:32.970050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.019 [2024-11-20 16:40:32.970282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.019 [2024-11-20 16:40:32.970291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.019 [2024-11-20 16:40:32.970299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.019 [2024-11-20 16:40:32.970307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:32.982926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:32.983599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:32.983636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.282 [2024-11-20 16:40:32.983647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.282 [2024-11-20 16:40:32.983887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.282 [2024-11-20 16:40:32.984121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.282 [2024-11-20 16:40:32.984130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.282 [2024-11-20 16:40:32.984139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.282 [2024-11-20 16:40:32.984147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:32.996755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:32.997366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:32.997403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.282 [2024-11-20 16:40:32.997415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.282 [2024-11-20 16:40:32.997655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.282 [2024-11-20 16:40:32.997879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.282 [2024-11-20 16:40:32.997888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.282 [2024-11-20 16:40:32.997895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.282 [2024-11-20 16:40:32.997903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:33.010747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:33.011423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:33.011461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.282 [2024-11-20 16:40:33.011472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.282 [2024-11-20 16:40:33.011712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.282 [2024-11-20 16:40:33.011936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.282 [2024-11-20 16:40:33.011945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.282 [2024-11-20 16:40:33.011957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.282 [2024-11-20 16:40:33.011965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:33.024588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:33.025216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:33.025253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.282 [2024-11-20 16:40:33.025264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.282 [2024-11-20 16:40:33.025503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.282 [2024-11-20 16:40:33.025728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.282 [2024-11-20 16:40:33.025737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.282 [2024-11-20 16:40:33.025745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.282 [2024-11-20 16:40:33.025753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:33.038591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:33.039206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:33.039244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.282 [2024-11-20 16:40:33.039254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.282 [2024-11-20 16:40:33.039494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.282 [2024-11-20 16:40:33.039718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.282 [2024-11-20 16:40:33.039727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.282 [2024-11-20 16:40:33.039735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.282 [2024-11-20 16:40:33.039743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:33.052583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:33.053145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:33.053165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.282 [2024-11-20 16:40:33.053173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.282 [2024-11-20 16:40:33.053394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.282 [2024-11-20 16:40:33.053614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.282 [2024-11-20 16:40:33.053623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.282 [2024-11-20 16:40:33.053631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.282 [2024-11-20 16:40:33.053638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:33.066477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:33.067048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:33.067066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.282 [2024-11-20 16:40:33.067073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.282 [2024-11-20 16:40:33.067294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.282 [2024-11-20 16:40:33.067513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.282 [2024-11-20 16:40:33.067521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.282 [2024-11-20 16:40:33.067528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.282 [2024-11-20 16:40:33.067535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:33.080360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:33.080975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:33.081020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.282 [2024-11-20 16:40:33.081030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.282 [2024-11-20 16:40:33.081270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.282 [2024-11-20 16:40:33.081495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.282 [2024-11-20 16:40:33.081503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.282 [2024-11-20 16:40:33.081511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.282 [2024-11-20 16:40:33.081519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.282 [2024-11-20 16:40:33.094328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.282 [2024-11-20 16:40:33.094978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-11-20 16:40:33.095023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.283 [2024-11-20 16:40:33.095034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.283 [2024-11-20 16:40:33.095273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.283 [2024-11-20 16:40:33.095497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.283 [2024-11-20 16:40:33.095506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.283 [2024-11-20 16:40:33.095514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.283 [2024-11-20 16:40:33.095522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.283 [2024-11-20 16:40:33.108139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.283 [2024-11-20 16:40:33.108756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-11-20 16:40:33.108793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.283 [2024-11-20 16:40:33.108810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.283 [2024-11-20 16:40:33.109062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.283 [2024-11-20 16:40:33.109287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.283 [2024-11-20 16:40:33.109296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.283 [2024-11-20 16:40:33.109304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.283 [2024-11-20 16:40:33.109312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.283 [2024-11-20 16:40:33.122140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.283 [2024-11-20 16:40:33.122815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-11-20 16:40:33.122852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.283 [2024-11-20 16:40:33.122863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.283 [2024-11-20 16:40:33.123113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.283 [2024-11-20 16:40:33.123338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.283 [2024-11-20 16:40:33.123347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.283 [2024-11-20 16:40:33.123356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.283 [2024-11-20 16:40:33.123364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.283 [2024-11-20 16:40:33.135987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.283 [2024-11-20 16:40:33.136670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-11-20 16:40:33.136707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.283 [2024-11-20 16:40:33.136718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.283 [2024-11-20 16:40:33.136958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.283 [2024-11-20 16:40:33.137192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.283 [2024-11-20 16:40:33.137202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.283 [2024-11-20 16:40:33.137210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.283 [2024-11-20 16:40:33.137218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.283 [2024-11-20 16:40:33.149845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.283 [2024-11-20 16:40:33.150505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-11-20 16:40:33.150543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.283 [2024-11-20 16:40:33.150554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.283 [2024-11-20 16:40:33.150794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.283 [2024-11-20 16:40:33.151032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.283 [2024-11-20 16:40:33.151042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.283 [2024-11-20 16:40:33.151049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.283 [2024-11-20 16:40:33.151057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.283 [2024-11-20 16:40:33.163882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.283 [2024-11-20 16:40:33.164579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-11-20 16:40:33.164616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.283 [2024-11-20 16:40:33.164627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.283 [2024-11-20 16:40:33.164866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.283 [2024-11-20 16:40:33.165100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.283 [2024-11-20 16:40:33.165110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.283 [2024-11-20 16:40:33.165118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.283 [2024-11-20 16:40:33.165125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.283 [2024-11-20 16:40:33.177728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.283 [2024-11-20 16:40:33.178409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-11-20 16:40:33.178446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.283 [2024-11-20 16:40:33.178458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.283 [2024-11-20 16:40:33.178701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.283 [2024-11-20 16:40:33.178925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.283 [2024-11-20 16:40:33.178934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.283 [2024-11-20 16:40:33.178942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.283 [2024-11-20 16:40:33.178950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.283 [2024-11-20 16:40:33.191575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.283 [2024-11-20 16:40:33.192120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-11-20 16:40:33.192140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.283 [2024-11-20 16:40:33.192148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.283 [2024-11-20 16:40:33.192369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.283 [2024-11-20 16:40:33.192589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.283 [2024-11-20 16:40:33.192598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.283 [2024-11-20 16:40:33.192610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.283 [2024-11-20 16:40:33.192617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.283 [2024-11-20 16:40:33.205434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.283 [2024-11-20 16:40:33.206085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.284 [2024-11-20 16:40:33.206123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.284 [2024-11-20 16:40:33.206133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.284 [2024-11-20 16:40:33.206374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.284 [2024-11-20 16:40:33.206598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.284 [2024-11-20 16:40:33.206607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.284 [2024-11-20 16:40:33.206614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.284 [2024-11-20 16:40:33.206622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.284 [2024-11-20 16:40:33.219452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.284 [2024-11-20 16:40:33.220083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.284 [2024-11-20 16:40:33.220120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.284 [2024-11-20 16:40:33.220132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.284 [2024-11-20 16:40:33.220375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.284 [2024-11-20 16:40:33.220600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.284 [2024-11-20 16:40:33.220609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.284 [2024-11-20 16:40:33.220617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.284 [2024-11-20 16:40:33.220625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.284 [2024-11-20 16:40:33.233460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.284 [2024-11-20 16:40:33.234144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.284 [2024-11-20 16:40:33.234182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.284 [2024-11-20 16:40:33.234193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.284 [2024-11-20 16:40:33.234433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.284 [2024-11-20 16:40:33.234658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.284 [2024-11-20 16:40:33.234668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.284 [2024-11-20 16:40:33.234676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.284 [2024-11-20 16:40:33.234683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.547 [2024-11-20 16:40:33.247321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.547 [2024-11-20 16:40:33.248002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.547 [2024-11-20 16:40:33.248039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.547 [2024-11-20 16:40:33.248051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.547 [2024-11-20 16:40:33.248294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.547 [2024-11-20 16:40:33.248518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.547 [2024-11-20 16:40:33.248526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.547 [2024-11-20 16:40:33.248535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.547 [2024-11-20 16:40:33.248543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.547 [2024-11-20 16:40:33.261179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.547 [2024-11-20 16:40:33.261780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.547 [2024-11-20 16:40:33.261817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.547 [2024-11-20 16:40:33.261828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.547 [2024-11-20 16:40:33.262076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.547 [2024-11-20 16:40:33.262301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.547 [2024-11-20 16:40:33.262310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.547 [2024-11-20 16:40:33.262318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.547 [2024-11-20 16:40:33.262326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.547 [2024-11-20 16:40:33.275167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.547 [2024-11-20 16:40:33.275837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.547 [2024-11-20 16:40:33.275874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.547 [2024-11-20 16:40:33.275885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.547 [2024-11-20 16:40:33.276135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.547 [2024-11-20 16:40:33.276360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.547 [2024-11-20 16:40:33.276369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.547 [2024-11-20 16:40:33.276377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.547 [2024-11-20 16:40:33.276385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.547 [2024-11-20 16:40:33.289001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.547 [2024-11-20 16:40:33.289678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.547 [2024-11-20 16:40:33.289716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.547 [2024-11-20 16:40:33.289731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2387353 Killed "${NVMF_APP[@]}" "$@"
00:28:47.547 [2024-11-20 16:40:33.289971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.547 [2024-11-20 16:40:33.290205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.547 [2024-11-20 16:40:33.290215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.547 [2024-11-20 16:40:33.290223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.547 [2024-11-20 16:40:33.290230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2389060
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2389060
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2389060 ']'
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:47.547 16:40:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x [2024-11-20 16:40:33.302867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.547 [2024-11-20 16:40:33.303448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.547 [2024-11-20 16:40:33.303486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.547 [2024-11-20 16:40:33.303497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.547 [2024-11-20 16:40:33.303736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.547 [2024-11-20 16:40:33.303961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.547 [2024-11-20 16:40:33.303971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.547 [2024-11-20 16:40:33.303979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.547 [2024-11-20 16:40:33.303998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.547 [2024-11-20 16:40:33.316842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.547 [2024-11-20 16:40:33.317305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.547 [2024-11-20 16:40:33.317325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.548 [2024-11-20 16:40:33.317333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.548 [2024-11-20 16:40:33.317554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.548 [2024-11-20 16:40:33.317774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.548 [2024-11-20 16:40:33.317782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.548 [2024-11-20 16:40:33.317789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.548 [2024-11-20 16:40:33.317796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.548 [2024-11-20 16:40:33.330854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.548 [2024-11-20 16:40:33.331402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.548 [2024-11-20 16:40:33.331420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.548 [2024-11-20 16:40:33.331428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.548 [2024-11-20 16:40:33.331647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.548 [2024-11-20 16:40:33.331867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.548 [2024-11-20 16:40:33.331875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.548 [2024-11-20 16:40:33.331882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.548 [2024-11-20 16:40:33.331889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.548 [2024-11-20 16:40:33.344753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.548 [2024-11-20 16:40:33.345290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.548 [2024-11-20 16:40:33.345308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.548 [2024-11-20 16:40:33.345316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.548 [2024-11-20 16:40:33.345537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.548 [2024-11-20 16:40:33.345756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.548 [2024-11-20 16:40:33.345764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.548 [2024-11-20 16:40:33.345771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.548 [2024-11-20 16:40:33.345778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.548 [2024-11-20 16:40:33.358617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.548 [2024-11-20 16:40:33.359250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.548 [2024-11-20 16:40:33.359287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.548 [2024-11-20 16:40:33.359298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.548 [2024-11-20 16:40:33.359543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.548 [2024-11-20 16:40:33.359767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.548 [2024-11-20 16:40:33.359776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.548 [2024-11-20 16:40:33.359784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.548 [2024-11-20 16:40:33.359792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.548 [2024-11-20 16:40:33.362594] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:28:47.548 [2024-11-20 16:40:33.362654] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:47.548 [2024-11-20 16:40:33.372630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.548 [2024-11-20 16:40:33.373253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.548 [2024-11-20 16:40:33.373291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.548 [2024-11-20 16:40:33.373302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.548 [2024-11-20 16:40:33.373542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.548 [2024-11-20 16:40:33.373766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.548 [2024-11-20 16:40:33.373776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.548 [2024-11-20 16:40:33.373784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.548 [2024-11-20 16:40:33.373792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.548 [2024-11-20 16:40:33.386699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.548 [2024-11-20 16:40:33.387346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.548 [2024-11-20 16:40:33.387383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:47.548 [2024-11-20 16:40:33.387395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:47.548 [2024-11-20 16:40:33.387634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:47.548 [2024-11-20 16:40:33.387858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.548 [2024-11-20 16:40:33.387867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.548 [2024-11-20 16:40:33.387876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.548 [2024-11-20 16:40:33.387884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.548 [2024-11-20 16:40:33.400728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.548 [2024-11-20 16:40:33.401454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.548 [2024-11-20 16:40:33.401492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.548 [2024-11-20 16:40:33.401508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.548 [2024-11-20 16:40:33.401748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.548 [2024-11-20 16:40:33.401974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.548 [2024-11-20 16:40:33.401992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.548 [2024-11-20 16:40:33.402000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.548 [2024-11-20 16:40:33.402008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.548 [2024-11-20 16:40:33.414629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.548 [2024-11-20 16:40:33.415283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.548 [2024-11-20 16:40:33.415321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.548 [2024-11-20 16:40:33.415332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.548 [2024-11-20 16:40:33.415572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.549 [2024-11-20 16:40:33.415796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.549 [2024-11-20 16:40:33.415805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.549 [2024-11-20 16:40:33.415813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.549 [2024-11-20 16:40:33.415821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.549 [2024-11-20 16:40:33.428652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.549 [2024-11-20 16:40:33.429297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.549 [2024-11-20 16:40:33.429335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.549 [2024-11-20 16:40:33.429346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.549 [2024-11-20 16:40:33.429585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.549 [2024-11-20 16:40:33.429809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.549 [2024-11-20 16:40:33.429818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.549 [2024-11-20 16:40:33.429826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.549 [2024-11-20 16:40:33.429835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.549 [2024-11-20 16:40:33.442485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.549 [2024-11-20 16:40:33.443094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.549 [2024-11-20 16:40:33.443131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.549 [2024-11-20 16:40:33.443141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.549 [2024-11-20 16:40:33.443381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.549 [2024-11-20 16:40:33.443610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.549 [2024-11-20 16:40:33.443620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.549 [2024-11-20 16:40:33.443628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.549 [2024-11-20 16:40:33.443636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.549 [2024-11-20 16:40:33.453126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:47.549 [2024-11-20 16:40:33.456472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.549 [2024-11-20 16:40:33.457036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.549 [2024-11-20 16:40:33.457062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.549 [2024-11-20 16:40:33.457071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.549 [2024-11-20 16:40:33.457298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.549 [2024-11-20 16:40:33.457520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.549 [2024-11-20 16:40:33.457528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.549 [2024-11-20 16:40:33.457535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.549 [2024-11-20 16:40:33.457543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.549 [2024-11-20 16:40:33.470363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.549 [2024-11-20 16:40:33.471066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.549 [2024-11-20 16:40:33.471103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.549 [2024-11-20 16:40:33.471115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.549 [2024-11-20 16:40:33.471359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.549 [2024-11-20 16:40:33.471583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.549 [2024-11-20 16:40:33.471592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.549 [2024-11-20 16:40:33.471600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.549 [2024-11-20 16:40:33.471608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.549 [2024-11-20 16:40:33.482256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:47.549 [2024-11-20 16:40:33.482277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:47.549 [2024-11-20 16:40:33.482283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:47.549 [2024-11-20 16:40:33.482289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:47.549 [2024-11-20 16:40:33.482294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:47.549 [2024-11-20 16:40:33.483332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:47.549 [2024-11-20 16:40:33.483485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:47.549 [2024-11-20 16:40:33.483488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:47.549 [2024-11-20 16:40:33.484230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.549 [2024-11-20 16:40:33.484935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.549 [2024-11-20 16:40:33.484973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.549 [2024-11-20 16:40:33.484994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.549 [2024-11-20 16:40:33.485239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.549 [2024-11-20 16:40:33.485464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.549 [2024-11-20 16:40:33.485473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.549 [2024-11-20 16:40:33.485481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.549 [2024-11-20 16:40:33.485490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.549 [2024-11-20 16:40:33.498117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.549 [2024-11-20 16:40:33.498826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.549 [2024-11-20 16:40:33.498864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.549 [2024-11-20 16:40:33.498876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.549 [2024-11-20 16:40:33.499127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.549 [2024-11-20 16:40:33.499352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.549 [2024-11-20 16:40:33.499363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.549 [2024-11-20 16:40:33.499372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.549 [2024-11-20 16:40:33.499380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.812 [2024-11-20 16:40:33.512006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.812 [2024-11-20 16:40:33.512460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.812 [2024-11-20 16:40:33.512479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.512487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.512708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.512928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.512938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.512945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.512952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.813 [2024-11-20 16:40:33.525993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.813 [2024-11-20 16:40:33.526612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.813 [2024-11-20 16:40:33.526650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.526667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.526908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.527141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.527150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.527159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.527167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.813 [2024-11-20 16:40:33.540013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.813 [2024-11-20 16:40:33.540669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.813 [2024-11-20 16:40:33.540706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.540717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.540958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.541191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.541201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.541210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.541217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.813 [2024-11-20 16:40:33.553845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.813 [2024-11-20 16:40:33.554506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.813 [2024-11-20 16:40:33.554544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.554555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.554794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.555028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.555038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.555046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.555054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.813 [2024-11-20 16:40:33.567849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.813 [2024-11-20 16:40:33.568406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.813 [2024-11-20 16:40:33.568426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.568434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.568655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.568880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.568889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.568896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.568903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.813 [2024-11-20 16:40:33.581724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.813 [2024-11-20 16:40:33.582149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.813 [2024-11-20 16:40:33.582168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.582176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.582397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.582618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.582625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.582633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.582639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.813 [2024-11-20 16:40:33.595665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.813 [2024-11-20 16:40:33.596322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.813 [2024-11-20 16:40:33.596360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.596372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.596612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.596836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.596845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.596853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.596861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.813 [2024-11-20 16:40:33.609705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.813 [2024-11-20 16:40:33.610412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.813 [2024-11-20 16:40:33.610450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.610461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.610701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.610925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.610934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.610946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.610954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.813 [2024-11-20 16:40:33.623589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.813 [2024-11-20 16:40:33.624289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.813 [2024-11-20 16:40:33.624327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.813 [2024-11-20 16:40:33.624338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.813 [2024-11-20 16:40:33.624578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.813 [2024-11-20 16:40:33.624803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.813 [2024-11-20 16:40:33.624812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.813 [2024-11-20 16:40:33.624820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.813 [2024-11-20 16:40:33.624828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 [2024-11-20 16:40:33.637468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.814 [2024-11-20 16:40:33.638086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.814 [2024-11-20 16:40:33.638123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.814 [2024-11-20 16:40:33.638134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.814 [2024-11-20 16:40:33.638373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.814 [2024-11-20 16:40:33.638597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.814 [2024-11-20 16:40:33.638607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.814 [2024-11-20 16:40:33.638614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.814 [2024-11-20 16:40:33.638622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 4641.00 IOPS, 18.13 MiB/s [2024-11-20T15:40:33.773Z]
00:28:47.814 [2024-11-20 16:40:33.651431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.814 [2024-11-20 16:40:33.651761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.814 [2024-11-20 16:40:33.651786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.814 [2024-11-20 16:40:33.651795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.814 [2024-11-20 16:40:33.652029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.814 [2024-11-20 16:40:33.652251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.814 [2024-11-20 16:40:33.652259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.814 [2024-11-20 16:40:33.652266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.814 [2024-11-20 16:40:33.652273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 [2024-11-20 16:40:33.665309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.814 [2024-11-20 16:40:33.665784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.814 [2024-11-20 16:40:33.665821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.814 [2024-11-20 16:40:33.665833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.814 [2024-11-20 16:40:33.666086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.814 [2024-11-20 16:40:33.666311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.814 [2024-11-20 16:40:33.666320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.814 [2024-11-20 16:40:33.666327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.814 [2024-11-20 16:40:33.666336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 [2024-11-20 16:40:33.679162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.814 [2024-11-20 16:40:33.679856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.814 [2024-11-20 16:40:33.679894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.814 [2024-11-20 16:40:33.679905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.814 [2024-11-20 16:40:33.680153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.814 [2024-11-20 16:40:33.680378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.814 [2024-11-20 16:40:33.680387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.814 [2024-11-20 16:40:33.680395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.814 [2024-11-20 16:40:33.680403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 [2024-11-20 16:40:33.693016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.814 [2024-11-20 16:40:33.693679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.814 [2024-11-20 16:40:33.693716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.814 [2024-11-20 16:40:33.693728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.814 [2024-11-20 16:40:33.693968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.814 [2024-11-20 16:40:33.694201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.814 [2024-11-20 16:40:33.694211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.814 [2024-11-20 16:40:33.694219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.814 [2024-11-20 16:40:33.694227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 [2024-11-20 16:40:33.706837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.814 [2024-11-20 16:40:33.707460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.814 [2024-11-20 16:40:33.707497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.814 [2024-11-20 16:40:33.707517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.814 [2024-11-20 16:40:33.707757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.814 [2024-11-20 16:40:33.707989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.814 [2024-11-20 16:40:33.707999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.814 [2024-11-20 16:40:33.708007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.814 [2024-11-20 16:40:33.708015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 [2024-11-20 16:40:33.720842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.814 [2024-11-20 16:40:33.721522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.814 [2024-11-20 16:40:33.721559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.814 [2024-11-20 16:40:33.721571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.814 [2024-11-20 16:40:33.721810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.814 [2024-11-20 16:40:33.722045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.814 [2024-11-20 16:40:33.722055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.814 [2024-11-20 16:40:33.722062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.814 [2024-11-20 16:40:33.722070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 [2024-11-20 16:40:33.734691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.814 [2024-11-20 16:40:33.735361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.814 [2024-11-20 16:40:33.735399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.814 [2024-11-20 16:40:33.735410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.814 [2024-11-20 16:40:33.735650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.814 [2024-11-20 16:40:33.735874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.814 [2024-11-20 16:40:33.735883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.814 [2024-11-20 16:40:33.735891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.814 [2024-11-20 16:40:33.735899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.814 [2024-11-20 16:40:33.748531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.815 [2024-11-20 16:40:33.749295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.815 [2024-11-20 16:40:33.749332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.815 [2024-11-20 16:40:33.749344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.815 [2024-11-20 16:40:33.749584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.815 [2024-11-20 16:40:33.749813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.815 [2024-11-20 16:40:33.749823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.815 [2024-11-20 16:40:33.749831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.815 [2024-11-20 16:40:33.749839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.815 [2024-11-20 16:40:33.762458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.815 [2024-11-20 16:40:33.762995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.815 [2024-11-20 16:40:33.763033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420
00:28:47.815 [2024-11-20 16:40:33.763045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set
00:28:47.815 [2024-11-20 16:40:33.763287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor
00:28:47.815 [2024-11-20 16:40:33.763511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.815 [2024-11-20 16:40:33.763520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.815 [2024-11-20 16:40:33.763528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.815 [2024-11-20 16:40:33.763536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.078 [2024-11-20 16:40:33.776361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.078 [2024-11-20 16:40:33.776803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.078 [2024-11-20 16:40:33.776822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.078 [2024-11-20 16:40:33.776830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.078 [2024-11-20 16:40:33.777055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.078 [2024-11-20 16:40:33.777277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.078 [2024-11-20 16:40:33.777285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.078 [2024-11-20 16:40:33.777293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.078 [2024-11-20 16:40:33.777299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.078 [2024-11-20 16:40:33.790324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.078 [2024-11-20 16:40:33.790973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.078 [2024-11-20 16:40:33.791018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.078 [2024-11-20 16:40:33.791029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.078 [2024-11-20 16:40:33.791269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.078 [2024-11-20 16:40:33.791494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.078 [2024-11-20 16:40:33.791503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.078 [2024-11-20 16:40:33.791516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.078 [2024-11-20 16:40:33.791524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.078 [2024-11-20 16:40:33.804149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.078 [2024-11-20 16:40:33.804855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.078 [2024-11-20 16:40:33.804893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.078 [2024-11-20 16:40:33.804904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.078 [2024-11-20 16:40:33.805152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.078 [2024-11-20 16:40:33.805378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.078 [2024-11-20 16:40:33.805387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.078 [2024-11-20 16:40:33.805395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.078 [2024-11-20 16:40:33.805403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.078 [2024-11-20 16:40:33.818025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.078 [2024-11-20 16:40:33.818679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.078 [2024-11-20 16:40:33.818717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.818728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.818967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.819203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.079 [2024-11-20 16:40:33.819213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.079 [2024-11-20 16:40:33.819221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.079 [2024-11-20 16:40:33.819228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.079 [2024-11-20 16:40:33.831845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.079 [2024-11-20 16:40:33.832531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.079 [2024-11-20 16:40:33.832569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.832580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.832819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.833052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.079 [2024-11-20 16:40:33.833062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.079 [2024-11-20 16:40:33.833071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.079 [2024-11-20 16:40:33.833078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.079 [2024-11-20 16:40:33.845716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.079 [2024-11-20 16:40:33.846418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.079 [2024-11-20 16:40:33.846456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.846467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.846706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.846932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.079 [2024-11-20 16:40:33.846941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.079 [2024-11-20 16:40:33.846949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.079 [2024-11-20 16:40:33.846957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.079 [2024-11-20 16:40:33.859581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.079 [2024-11-20 16:40:33.860326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.079 [2024-11-20 16:40:33.860364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.860375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.860615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.860839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.079 [2024-11-20 16:40:33.860848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.079 [2024-11-20 16:40:33.860856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.079 [2024-11-20 16:40:33.860864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.079 [2024-11-20 16:40:33.873487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.079 [2024-11-20 16:40:33.874115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.079 [2024-11-20 16:40:33.874154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.874166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.874409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.874633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.079 [2024-11-20 16:40:33.874643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.079 [2024-11-20 16:40:33.874650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.079 [2024-11-20 16:40:33.874658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.079 [2024-11-20 16:40:33.887328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.079 [2024-11-20 16:40:33.887841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.079 [2024-11-20 16:40:33.887878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.887895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.888147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.888373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.079 [2024-11-20 16:40:33.888382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.079 [2024-11-20 16:40:33.888390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.079 [2024-11-20 16:40:33.888398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.079 [2024-11-20 16:40:33.901235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.079 [2024-11-20 16:40:33.901935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.079 [2024-11-20 16:40:33.901973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.901992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.902233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.902457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.079 [2024-11-20 16:40:33.902467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.079 [2024-11-20 16:40:33.902475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.079 [2024-11-20 16:40:33.902483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.079 [2024-11-20 16:40:33.915102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.079 [2024-11-20 16:40:33.915725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.079 [2024-11-20 16:40:33.915762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.915773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.916021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.916246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.079 [2024-11-20 16:40:33.916255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.079 [2024-11-20 16:40:33.916263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.079 [2024-11-20 16:40:33.916272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.079 [2024-11-20 16:40:33.929121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.079 [2024-11-20 16:40:33.929678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.079 [2024-11-20 16:40:33.929697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.079 [2024-11-20 16:40:33.929705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.079 [2024-11-20 16:40:33.929926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.079 [2024-11-20 16:40:33.930158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.080 [2024-11-20 16:40:33.930168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.080 [2024-11-20 16:40:33.930175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.080 [2024-11-20 16:40:33.930182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.080 [2024-11-20 16:40:33.943015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.080 [2024-11-20 16:40:33.943626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.080 [2024-11-20 16:40:33.943642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.080 [2024-11-20 16:40:33.943650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.080 [2024-11-20 16:40:33.943869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.080 [2024-11-20 16:40:33.944103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.080 [2024-11-20 16:40:33.944113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.080 [2024-11-20 16:40:33.944120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.080 [2024-11-20 16:40:33.944127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.080 [2024-11-20 16:40:33.957019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.080 [2024-11-20 16:40:33.957589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.080 [2024-11-20 16:40:33.957606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.080 [2024-11-20 16:40:33.957614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.080 [2024-11-20 16:40:33.957833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.080 [2024-11-20 16:40:33.958058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.080 [2024-11-20 16:40:33.958066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.080 [2024-11-20 16:40:33.958073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.080 [2024-11-20 16:40:33.958080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.080 [2024-11-20 16:40:33.970897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.080 [2024-11-20 16:40:33.971539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.080 [2024-11-20 16:40:33.971576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.080 [2024-11-20 16:40:33.971587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.080 [2024-11-20 16:40:33.971826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.080 [2024-11-20 16:40:33.972059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.080 [2024-11-20 16:40:33.972070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.080 [2024-11-20 16:40:33.972088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.080 [2024-11-20 16:40:33.972096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.080 [2024-11-20 16:40:33.984731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.080 [2024-11-20 16:40:33.985218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.080 [2024-11-20 16:40:33.985256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.080 [2024-11-20 16:40:33.985269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.080 [2024-11-20 16:40:33.985510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.080 [2024-11-20 16:40:33.985735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.080 [2024-11-20 16:40:33.985745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.080 [2024-11-20 16:40:33.985753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.080 [2024-11-20 16:40:33.985761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.080 [2024-11-20 16:40:33.998595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.080 [2024-11-20 16:40:33.999298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.080 [2024-11-20 16:40:33.999336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.080 [2024-11-20 16:40:33.999347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.080 [2024-11-20 16:40:33.999588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.080 [2024-11-20 16:40:33.999812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.080 [2024-11-20 16:40:33.999820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.080 [2024-11-20 16:40:33.999828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.080 [2024-11-20 16:40:33.999836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.080 [2024-11-20 16:40:34.012457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.080 [2024-11-20 16:40:34.013048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.080 [2024-11-20 16:40:34.013069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.080 [2024-11-20 16:40:34.013077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.080 [2024-11-20 16:40:34.013298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.080 [2024-11-20 16:40:34.013518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.080 [2024-11-20 16:40:34.013526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.080 [2024-11-20 16:40:34.013534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.080 [2024-11-20 16:40:34.013540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.080 [2024-11-20 16:40:34.026364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.080 [2024-11-20 16:40:34.026790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.080 [2024-11-20 16:40:34.026806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.080 [2024-11-20 16:40:34.026814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.080 [2024-11-20 16:40:34.027042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.080 [2024-11-20 16:40:34.027263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.080 [2024-11-20 16:40:34.027271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.080 [2024-11-20 16:40:34.027278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.080 [2024-11-20 16:40:34.027285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.344 [2024-11-20 16:40:34.040329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.344 [2024-11-20 16:40:34.041011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.344 [2024-11-20 16:40:34.041049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.344 [2024-11-20 16:40:34.041061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.344 [2024-11-20 16:40:34.041304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.344 [2024-11-20 16:40:34.041528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.344 [2024-11-20 16:40:34.041537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.344 [2024-11-20 16:40:34.041545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.344 [2024-11-20 16:40:34.041553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.344 [2024-11-20 16:40:34.054183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.344 [2024-11-20 16:40:34.054878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.344 [2024-11-20 16:40:34.054916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.344 [2024-11-20 16:40:34.054928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.345 [2024-11-20 16:40:34.055177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.345 [2024-11-20 16:40:34.055403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.345 [2024-11-20 16:40:34.055412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.345 [2024-11-20 16:40:34.055420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.345 [2024-11-20 16:40:34.055428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.345 [2024-11-20 16:40:34.068065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.345 [2024-11-20 16:40:34.068617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.345 [2024-11-20 16:40:34.068636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.345 [2024-11-20 16:40:34.068650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.345 [2024-11-20 16:40:34.068870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.345 [2024-11-20 16:40:34.069098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.345 [2024-11-20 16:40:34.069108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.345 [2024-11-20 16:40:34.069116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.345 [2024-11-20 16:40:34.069122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.345 [2024-11-20 16:40:34.081946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.345 [2024-11-20 16:40:34.082626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.345 [2024-11-20 16:40:34.082664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.345 [2024-11-20 16:40:34.082675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.345 [2024-11-20 16:40:34.082915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.345 [2024-11-20 16:40:34.083148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.345 [2024-11-20 16:40:34.083159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.345 [2024-11-20 16:40:34.083167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.345 [2024-11-20 16:40:34.083175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.345 [2024-11-20 16:40:34.095797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.345 [2024-11-20 16:40:34.096349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.345 [2024-11-20 16:40:34.096369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.345 [2024-11-20 16:40:34.096377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.345 [2024-11-20 16:40:34.096598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.345 [2024-11-20 16:40:34.096819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.345 [2024-11-20 16:40:34.096828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.345 [2024-11-20 16:40:34.096835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.345 [2024-11-20 16:40:34.096841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.345 [2024-11-20 16:40:34.109659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.345 [2024-11-20 16:40:34.110245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.345 [2024-11-20 16:40:34.110281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.345 [2024-11-20 16:40:34.110294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.345 [2024-11-20 16:40:34.110535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.345 [2024-11-20 16:40:34.110766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.345 [2024-11-20 16:40:34.110775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.345 [2024-11-20 16:40:34.110784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.345 [2024-11-20 16:40:34.110792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.345 [2024-11-20 16:40:34.123483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.345 [2024-11-20 16:40:34.123947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.345 [2024-11-20 16:40:34.123966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.345 [2024-11-20 16:40:34.123974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.345 [2024-11-20 16:40:34.124199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.345 [2024-11-20 16:40:34.124421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.345 [2024-11-20 16:40:34.124429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.345 [2024-11-20 16:40:34.124436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.345 [2024-11-20 16:40:34.124443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.345 [2024-11-20 16:40:34.137478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.345 [2024-11-20 16:40:34.138044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.345 [2024-11-20 16:40:34.138062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.345 [2024-11-20 16:40:34.138069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.345 [2024-11-20 16:40:34.138289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.345 [2024-11-20 16:40:34.138510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.345 [2024-11-20 16:40:34.138518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.345 [2024-11-20 16:40:34.138525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.345 [2024-11-20 16:40:34.138532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.345 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.345 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:48.345 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.345 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.345 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.345 [2024-11-20 16:40:34.151436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.345 [2024-11-20 16:40:34.151853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.345 [2024-11-20 16:40:34.151870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.345 [2024-11-20 16:40:34.151877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.345 [2024-11-20 16:40:34.152107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.345 [2024-11-20 16:40:34.152329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.345 [2024-11-20 16:40:34.152337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.346 [2024-11-20 16:40:34.152345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.346 [2024-11-20 16:40:34.152351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.346 [2024-11-20 16:40:34.165392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.346 [2024-11-20 16:40:34.166095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.346 [2024-11-20 16:40:34.166133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.346 [2024-11-20 16:40:34.166146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.346 [2024-11-20 16:40:34.166389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.346 [2024-11-20 16:40:34.166613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.346 [2024-11-20 16:40:34.166623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.346 [2024-11-20 16:40:34.166631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.346 [2024-11-20 16:40:34.166639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.346 [2024-11-20 16:40:34.179249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.346 [2024-11-20 16:40:34.179946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.346 [2024-11-20 16:40:34.179991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.346 [2024-11-20 16:40:34.180003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.346 [2024-11-20 16:40:34.180243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.346 [2024-11-20 16:40:34.180468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.346 [2024-11-20 16:40:34.180476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.346 [2024-11-20 16:40:34.180484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.346 [2024-11-20 16:40:34.180492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.346 [2024-11-20 16:40:34.193116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.346 [2024-11-20 16:40:34.193701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.346 [2024-11-20 16:40:34.193738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.346 [2024-11-20 16:40:34.193755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.346 [2024-11-20 16:40:34.194007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.346 [2024-11-20 16:40:34.194232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.346 [2024-11-20 16:40:34.194241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.346 [2024-11-20 16:40:34.194249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.346 [2024-11-20 16:40:34.194257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.346 [2024-11-20 16:40:34.196551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.346 [2024-11-20 16:40:34.207088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.346 [2024-11-20 16:40:34.207759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.346 [2024-11-20 16:40:34.207798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.346 [2024-11-20 16:40:34.207811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.346 [2024-11-20 16:40:34.208059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.346 [2024-11-20 16:40:34.208286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.346 [2024-11-20 16:40:34.208295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.346 [2024-11-20 16:40:34.208302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.346 [2024-11-20 16:40:34.208310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.346 [2024-11-20 16:40:34.220918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.346 [2024-11-20 16:40:34.221582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.346 [2024-11-20 16:40:34.221620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.346 [2024-11-20 16:40:34.221631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.346 [2024-11-20 16:40:34.221870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.346 [2024-11-20 16:40:34.222104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.346 [2024-11-20 16:40:34.222114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.346 [2024-11-20 16:40:34.222122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.346 [2024-11-20 16:40:34.222130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.346 Malloc0 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.346 [2024-11-20 16:40:34.234764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.346 [2024-11-20 16:40:34.235327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.346 [2024-11-20 16:40:34.235365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.346 [2024-11-20 16:40:34.235376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.346 [2024-11-20 16:40:34.235616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.346 [2024-11-20 16:40:34.235841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.346 [2024-11-20 16:40:34.235850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.346 [2024-11-20 16:40:34.235858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.346 [2024-11-20 16:40:34.235867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.346 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.346 [2024-11-20 16:40:34.248723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.346 [2024-11-20 16:40:34.249266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.346 [2024-11-20 16:40:34.249305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e6280 with addr=10.0.0.2, port=4420 00:28:48.347 [2024-11-20 16:40:34.249316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e6280 is same with the state(6) to be set 00:28:48.347 [2024-11-20 16:40:34.249556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e6280 (9): Bad file descriptor 00:28:48.347 [2024-11-20 16:40:34.249783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.347 [2024-11-20 16:40:34.249796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.347 [2024-11-20 16:40:34.249804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.347 [2024-11-20 16:40:34.249813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.347 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.347 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.347 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.347 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.347 [2024-11-20 16:40:34.261589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.347 [2024-11-20 16:40:34.262661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.347 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.347 16:40:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2387918 00:28:48.609 [2024-11-20 16:40:34.330917] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:49.814 4495.00 IOPS, 17.56 MiB/s [2024-11-20T15:40:36.717Z] 5334.38 IOPS, 20.84 MiB/s [2024-11-20T15:40:37.658Z] 5990.11 IOPS, 23.40 MiB/s [2024-11-20T15:40:39.043Z] 6511.40 IOPS, 25.44 MiB/s [2024-11-20T15:40:39.986Z] 6983.27 IOPS, 27.28 MiB/s [2024-11-20T15:40:40.929Z] 7338.92 IOPS, 28.67 MiB/s [2024-11-20T15:40:41.871Z] 7647.46 IOPS, 29.87 MiB/s [2024-11-20T15:40:42.813Z] 7895.50 IOPS, 30.84 MiB/s [2024-11-20T15:40:42.813Z] 8107.07 IOPS, 31.67 MiB/s 00:28:56.854 Latency(us) 00:28:56.854 [2024-11-20T15:40:42.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.854 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:56.854 Verification LBA range: start 0x0 length 0x4000 00:28:56.854 Nvme1n1 : 15.01 8111.23 31.68 9815.41 0.00 7114.63 795.31 14854.83 00:28:56.854 [2024-11-20T15:40:42.813Z] =================================================================================================================== 00:28:56.854 [2024-11-20T15:40:42.813Z] Total : 8111.23 31.68 9815.41 0.00 7114.63 795.31 14854.83 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 
-- # sync 00:28:56.854 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.855 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:56.855 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.855 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.117 rmmod nvme_tcp 00:28:57.117 rmmod nvme_fabrics 00:28:57.117 rmmod nvme_keyring 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2389060 ']' 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2389060 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2389060 ']' 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2389060 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2389060 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2389060' 00:28:57.117 killing process with pid 2389060 00:28:57.117 16:40:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2389060 00:28:57.117 16:40:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2389060 00:28:57.117 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.117 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.117 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.117 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:57.117 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:57.117 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.117 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.379 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.379 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.379 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.379 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.379 16:40:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.293 16:40:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.293 00:28:59.293 real 0m27.870s 00:28:59.293 user 1m2.589s 00:28:59.293 sys 0m7.390s 00:28:59.294 16:40:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.294 16:40:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.294 ************************************ 00:28:59.294 END TEST nvmf_bdevperf 00:28:59.294 
************************************ 00:28:59.294 16:40:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:59.294 16:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.294 16:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.294 16:40:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.294 ************************************ 00:28:59.294 START TEST nvmf_target_disconnect 00:28:59.294 ************************************ 00:28:59.294 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:59.580 * Looking for test storage... 00:28:59.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.580 --rc genhtml_branch_coverage=1 00:28:59.580 --rc genhtml_function_coverage=1 00:28:59.580 --rc genhtml_legend=1 00:28:59.580 --rc geninfo_all_blocks=1 00:28:59.580 --rc geninfo_unexecuted_blocks=1 
00:28:59.580 00:28:59.580 ' 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.580 --rc genhtml_branch_coverage=1 00:28:59.580 --rc genhtml_function_coverage=1 00:28:59.580 --rc genhtml_legend=1 00:28:59.580 --rc geninfo_all_blocks=1 00:28:59.580 --rc geninfo_unexecuted_blocks=1 00:28:59.580 00:28:59.580 ' 00:28:59.580 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:59.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.580 --rc genhtml_branch_coverage=1 00:28:59.580 --rc genhtml_function_coverage=1 00:28:59.580 --rc genhtml_legend=1 00:28:59.580 --rc geninfo_all_blocks=1 00:28:59.581 --rc geninfo_unexecuted_blocks=1 00:28:59.581 00:28:59.581 ' 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:59.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.581 --rc genhtml_branch_coverage=1 00:28:59.581 --rc genhtml_function_coverage=1 00:28:59.581 --rc genhtml_legend=1 00:28:59.581 --rc geninfo_all_blocks=1 00:28:59.581 --rc geninfo_unexecuted_blocks=1 00:28:59.581 00:28:59.581 ' 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.581 16:40:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.581 16:40:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.826 
16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:07.826 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:07.826 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:07.826 Found net devices under 0000:31:00.0: cvl_0_0 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:07.826 Found net devices under 0000:31:00.1: cvl_0_1 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.826 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.827 16:40:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:29:07.827 00:29:07.827 --- 10.0.0.2 ping statistics --- 00:29:07.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.827 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:29:07.827 00:29:07.827 --- 10.0.0.1 ping statistics --- 00:29:07.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.827 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.827 16:40:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.827 16:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:07.827 ************************************ 00:29:07.827 START TEST nvmf_target_disconnect_tc1 00:29:07.827 ************************************ 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.827 [2024-11-20 16:40:53.116933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.827 [2024-11-20 16:40:53.116993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1070f60 with 
addr=10.0.0.2, port=4420 00:29:07.827 [2024-11-20 16:40:53.117020] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:07.827 [2024-11-20 16:40:53.117035] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:07.827 [2024-11-20 16:40:53.117043] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:07.827 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:07.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:07.827 Initializing NVMe Controllers 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:07.827 00:29:07.827 real 0m0.124s 00:29:07.827 user 0m0.057s 00:29:07.827 sys 0m0.068s 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:07.827 ************************************ 00:29:07.827 END TEST nvmf_target_disconnect_tc1 00:29:07.827 ************************************ 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:07.827 16:40:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:07.827 ************************************ 00:29:07.827 START TEST nvmf_target_disconnect_tc2 00:29:07.827 ************************************ 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2395146 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2395146 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2395146 ']' 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.827 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.828 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.828 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.828 16:40:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.828 [2024-11-20 16:40:53.281209] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:29:07.828 [2024-11-20 16:40:53.281281] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.828 [2024-11-20 16:40:53.381710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.828 [2024-11-20 16:40:53.434806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.828 [2024-11-20 16:40:53.434858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.828 [2024-11-20 16:40:53.434868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.828 [2024-11-20 16:40:53.434875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.828 [2024-11-20 16:40:53.434882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:07.828 [2024-11-20 16:40:53.437339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:07.828 [2024-11-20 16:40:53.437499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:07.828 [2024-11-20 16:40:53.437643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:07.828 [2024-11-20 16:40:53.437644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.400 Malloc0 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.400 16:40:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.400 [2024-11-20 16:40:54.203976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.400 16:40:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.400 [2024-11-20 16:40:54.244393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2395486 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:08.400 16:40:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:10.322 16:40:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2395146 00:29:10.322 16:40:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:10.322 Read completed with error (sct=0, sc=8) 00:29:10.322 starting I/O failed 00:29:10.322 [... "completed with error (sct=0, sc=8) / starting I/O failed" repeats for all 32 outstanding I/Os (17 reads, 15 writes) after the target was killed ...] 00:29:10.322 [2024-11-20 16:40:56.277861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:10.322 [2024-11-20 16:40:56.278478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.322 [2024-11-20 16:40:56.278520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.322 qpair failed and we were unable to recover it. 00:29:10.593 [2024-11-20 16:40:56.278846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.593 [2024-11-20 16:40:56.278860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.593 qpair failed and we were unable to recover it.
00:29:10.593 [... the "connect() failed, errno = 111" (ECONNREFUSED) / "sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats ~80 more times (16:40:56.279 through 16:40:56.302) as the reconnect app retries the killed target ...] 00:29:10.595 [2024-11-20 16:40:56.302201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.302211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it.
00:29:10.595 [2024-11-20 16:40:56.302561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.302571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.302779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.302789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.303091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.303102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.303400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.303410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.303688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.303700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 
00:29:10.595 [2024-11-20 16:40:56.304029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.304039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.304363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.304373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.304669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.304679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.304972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.304986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.305287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.305297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 
00:29:10.595 [2024-11-20 16:40:56.305490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.305502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.305822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.305832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.306128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.306138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.306446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.306456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.306741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.306751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 
00:29:10.595 [2024-11-20 16:40:56.307027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.307037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.307397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.307406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.307672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.307681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.308022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.308032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.308313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.308322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 
00:29:10.595 [2024-11-20 16:40:56.308597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.308606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.308817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.308826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.309221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.309231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.309557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.309567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.309847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.309857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 
00:29:10.595 [2024-11-20 16:40:56.310144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.310154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.310481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.310490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.310773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.310783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.311046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.311056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.311350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.311361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 
00:29:10.595 [2024-11-20 16:40:56.311638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.311648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.311794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.311807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.312147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.312158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.312450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.312460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.312742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.312752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 
00:29:10.595 [2024-11-20 16:40:56.313032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.313042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.313347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.313356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.313643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.313652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.314041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.314052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 00:29:10.595 [2024-11-20 16:40:56.314429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.595 [2024-11-20 16:40:56.314438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.595 qpair failed and we were unable to recover it. 
00:29:10.595 [2024-11-20 16:40:56.314725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.314735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.314991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.315001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.315347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.315357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.315659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.315668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.315990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.316000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 
00:29:10.596 [2024-11-20 16:40:56.316336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.316346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.316518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.316527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.316779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.316789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.317163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.317173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.317460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.317470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 
00:29:10.596 [2024-11-20 16:40:56.317751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.317760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.317968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.317978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.318291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.318301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.318595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.318604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.318876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.318885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 
00:29:10.596 [2024-11-20 16:40:56.319188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.319198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.319512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.319522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.319693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.319702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.320041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.320051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.320236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.320246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 
00:29:10.596 [2024-11-20 16:40:56.320610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.320620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.320991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.321001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.321313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.321323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.321624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.321634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.321927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.321938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 
00:29:10.596 [2024-11-20 16:40:56.322242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.322252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.322521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.322531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.322815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.322825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.323120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.323130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.323455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.323465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 
00:29:10.596 [2024-11-20 16:40:56.323756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.323766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.324067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.324077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.324367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.324379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.324529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.324540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 00:29:10.596 [2024-11-20 16:40:56.324835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.596 [2024-11-20 16:40:56.324845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.596 qpair failed and we were unable to recover it. 
00:29:10.596 [2024-11-20 16:40:56.325148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.596 [2024-11-20 16:40:56.325158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:10.596 qpair failed and we were unable to recover it.
00:29:10.600 [2024-11-20 16:40:56.358572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.358582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.358850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.358859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.359031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.359042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.359366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.359375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.359683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.359692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 
00:29:10.600 [2024-11-20 16:40:56.360089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.360099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.360370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.360379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.360598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.360609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.360791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.360803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.360976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.360989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 
00:29:10.600 [2024-11-20 16:40:56.361246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.361255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.361571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.361581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.361887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.361897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.362196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.362206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.362495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.362504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 
00:29:10.600 [2024-11-20 16:40:56.362892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.362901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.363231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.363240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.363573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.363582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.363889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.363900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.364218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.364228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 
00:29:10.600 [2024-11-20 16:40:56.364506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.364516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.364830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.364840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.365124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.365134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.365455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.365465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.365747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.365756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 
00:29:10.600 [2024-11-20 16:40:56.366059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.366069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.366350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.366360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.366673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.600 [2024-11-20 16:40:56.366683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.600 qpair failed and we were unable to recover it. 00:29:10.600 [2024-11-20 16:40:56.367032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.367042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.367349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.367358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 
00:29:10.601 [2024-11-20 16:40:56.367660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.367670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.367972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.367984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.368172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.368182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.368523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.368532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.368848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.368857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 
00:29:10.601 [2024-11-20 16:40:56.369021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.369032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.369302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.369311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.369611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.369620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.369911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.369921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.370220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.370230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 
00:29:10.601 [2024-11-20 16:40:56.370534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.370544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.370727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.370737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.370954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.370963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.371286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.371296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.371570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.371579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 
00:29:10.601 [2024-11-20 16:40:56.371886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.371895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.372192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.372203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.372533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.372543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.372867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.372878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.373130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.373140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 
00:29:10.601 [2024-11-20 16:40:56.373461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.373470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.373775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.373785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.373911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.373921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.374227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.374237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.374529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.374540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 
00:29:10.601 [2024-11-20 16:40:56.374852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.374862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.375142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.375151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.375454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.375463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.375765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.375775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.376049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.376058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 
00:29:10.601 [2024-11-20 16:40:56.376351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.376361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.376709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.376719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.377020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.377030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.377426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.377435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.377792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.377801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 
00:29:10.601 [2024-11-20 16:40:56.378102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.378112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.378187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.378198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.378505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.378515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.378808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.601 [2024-11-20 16:40:56.378818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.601 qpair failed and we were unable to recover it. 00:29:10.601 [2024-11-20 16:40:56.379103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.379113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 
00:29:10.602 [2024-11-20 16:40:56.379307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.379316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 00:29:10.602 [2024-11-20 16:40:56.379546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.379555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 00:29:10.602 [2024-11-20 16:40:56.379910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.379920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 00:29:10.602 [2024-11-20 16:40:56.380224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.380234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 00:29:10.602 [2024-11-20 16:40:56.380574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.380583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 
00:29:10.602 [2024-11-20 16:40:56.380852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.380861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 00:29:10.602 [2024-11-20 16:40:56.381161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.381172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 00:29:10.602 [2024-11-20 16:40:56.381483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.381492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 00:29:10.602 [2024-11-20 16:40:56.381873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.381883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 00:29:10.602 [2024-11-20 16:40:56.382064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.602 [2024-11-20 16:40:56.382074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.602 qpair failed and we were unable to recover it. 
00:29:10.604 [2024-11-20 16:40:56.415605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.415614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.415895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.415910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.416217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.416227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.416535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.416545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.416849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.416859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 
00:29:10.605 [2024-11-20 16:40:56.417064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.417076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.417371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.417381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.417682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.417692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.417975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.417987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.418290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.418300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 
00:29:10.605 [2024-11-20 16:40:56.418604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.418613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.418890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.418900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.419199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.419208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.419511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.419521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.419805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.419814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 
00:29:10.605 [2024-11-20 16:40:56.420118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.420128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.420427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.420437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.420635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.420645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.420948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.420958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.421323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.421333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 
00:29:10.605 [2024-11-20 16:40:56.421666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.421676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.421986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.421997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.422271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.422282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.422558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.422568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.422866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.422876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 
00:29:10.605 [2024-11-20 16:40:56.423231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.423241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.423525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.423536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.423837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.423847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.424150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.424160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.424490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.424500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 
00:29:10.605 [2024-11-20 16:40:56.424809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.424819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.425122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.425131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.425427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.425439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.425736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.425746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.426042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.426052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 
00:29:10.605 [2024-11-20 16:40:56.426344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.426354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.426658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.426668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.426972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.426985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.605 [2024-11-20 16:40:56.427269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.605 [2024-11-20 16:40:56.427280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.605 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.427583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.427594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 
00:29:10.606 [2024-11-20 16:40:56.427894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.427903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.428283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.428293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.428600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.428609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.428916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.428926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.429134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.429143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 
00:29:10.606 [2024-11-20 16:40:56.429490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.429506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.429813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.429823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.430131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.430141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.430458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.430467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.430780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.430790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 
00:29:10.606 [2024-11-20 16:40:56.431107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.431117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.431344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.431353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.431674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.431683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.431968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.431978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.432280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.432290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 
00:29:10.606 [2024-11-20 16:40:56.432596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.432605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.432884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.432894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.433148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.433159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.433466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.433476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.433761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.433771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 
00:29:10.606 [2024-11-20 16:40:56.433955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.433966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.434271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.434281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.434587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.434597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.434787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.434796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.435164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.435174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 
00:29:10.606 [2024-11-20 16:40:56.435499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.435508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.435819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.435829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.436123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.436133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.436314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.436323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.436648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.436657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 
00:29:10.606 [2024-11-20 16:40:56.436952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.436961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.437258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.437268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.437570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.437579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.606 [2024-11-20 16:40:56.437854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.606 [2024-11-20 16:40:56.437867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.606 qpair failed and we were unable to recover it. 00:29:10.607 [2024-11-20 16:40:56.438163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.607 [2024-11-20 16:40:56.438174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.607 qpair failed and we were unable to recover it. 
00:29:10.607 [2024-11-20 16:40:56.438536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.607 [2024-11-20 16:40:56.438546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.607 qpair failed and we were unable to recover it. 00:29:10.607 [2024-11-20 16:40:56.438842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.607 [2024-11-20 16:40:56.438852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.607 qpair failed and we were unable to recover it. 00:29:10.607 [2024-11-20 16:40:56.439133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.607 [2024-11-20 16:40:56.439143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.607 qpair failed and we were unable to recover it. 00:29:10.607 [2024-11-20 16:40:56.439451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.607 [2024-11-20 16:40:56.439461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.607 qpair failed and we were unable to recover it. 00:29:10.607 [2024-11-20 16:40:56.439766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.607 [2024-11-20 16:40:56.439776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.607 qpair failed and we were unable to recover it. 
00:29:10.609 [2024-11-20 16:40:56.472904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.609 [2024-11-20 16:40:56.472913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.609 qpair failed and we were unable to recover it. 00:29:10.609 [2024-11-20 16:40:56.473243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.609 [2024-11-20 16:40:56.473253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.609 qpair failed and we were unable to recover it. 00:29:10.609 [2024-11-20 16:40:56.473600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.609 [2024-11-20 16:40:56.473609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.609 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.473901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.473917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.474223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.474233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 
00:29:10.610 [2024-11-20 16:40:56.474512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.474522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.474829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.474838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.475123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.475133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.475435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.475445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.475635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.475646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 
00:29:10.610 [2024-11-20 16:40:56.475989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.475999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.476293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.476302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.476482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.476491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.476830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.476839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.477207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.477217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 
00:29:10.610 [2024-11-20 16:40:56.477486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.477495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.477690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.477701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.477976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.477991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.478287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.478297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.478602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.478614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 
00:29:10.610 [2024-11-20 16:40:56.478933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.478943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.479298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.479308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.479691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.479701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.480009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.480019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.480377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.480387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 
00:29:10.610 [2024-11-20 16:40:56.480688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.480697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.480977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.480992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.481347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.481357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.481640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.481651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.481935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.481944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 
00:29:10.610 [2024-11-20 16:40:56.482249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.482259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.482568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.482577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.482866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.482875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.483195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.483206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.483512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.483521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 
00:29:10.610 [2024-11-20 16:40:56.483842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.610 [2024-11-20 16:40:56.483851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.610 qpair failed and we were unable to recover it. 00:29:10.610 [2024-11-20 16:40:56.484016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.484027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.484312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.484322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.484617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.484627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.484942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.484952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 
00:29:10.611 [2024-11-20 16:40:56.485249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.485260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.485553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.485564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.485825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.485835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.486133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.486144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.486436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.486447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 
00:29:10.611 [2024-11-20 16:40:56.486750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.486761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.487057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.487068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.487378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.487387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.487689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.487698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.488004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.488013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 
00:29:10.611 [2024-11-20 16:40:56.488354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.488364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.488669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.488678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.489012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.489022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.489335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.489344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.489642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.489651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 
00:29:10.611 [2024-11-20 16:40:56.489818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.489829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.490176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.490186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.490500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.490509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.490814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.490824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.491107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.491117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 
00:29:10.611 [2024-11-20 16:40:56.491433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.491443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.491752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.491762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.492084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.492093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.492409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.492418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.492611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.492621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 
00:29:10.611 [2024-11-20 16:40:56.492841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.492850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.493145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.493155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.493441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.493457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.493682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.493692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.493993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.494004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 
00:29:10.611 [2024-11-20 16:40:56.494324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.494334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.494603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.494613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.494936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.494945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.495228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.611 [2024-11-20 16:40:56.495239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.611 qpair failed and we were unable to recover it. 00:29:10.611 [2024-11-20 16:40:56.495518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.612 [2024-11-20 16:40:56.495528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.612 qpair failed and we were unable to recover it. 
00:29:10.612 [2024-11-20 16:40:56.495706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.612 [2024-11-20 16:40:56.495716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.612 qpair failed and we were unable to recover it.
[identical connect() failure / qpair recovery messages (errno = 111, tqpair=0x8a1010, addr=10.0.0.2, port=4420) repeated through 2024-11-20 16:40:56.529399; duplicates omitted]
00:29:10.614 [2024-11-20 16:40:56.529716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.614 [2024-11-20 16:40:56.529726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.614 qpair failed and we were unable to recover it. 00:29:10.614 [2024-11-20 16:40:56.530043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.614 [2024-11-20 16:40:56.530053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.614 qpair failed and we were unable to recover it. 00:29:10.614 [2024-11-20 16:40:56.530355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.614 [2024-11-20 16:40:56.530365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.614 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.530648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.530658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.530960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.530970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 
00:29:10.615 [2024-11-20 16:40:56.531133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.531145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.531430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.531444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.531748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.531759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.532031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.532041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.532347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.532358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 
00:29:10.615 [2024-11-20 16:40:56.532752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.532761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.533056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.533065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.533387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.533397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.533645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.533654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.533874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.533884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 
00:29:10.615 [2024-11-20 16:40:56.534244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.534254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.534585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.534595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.534904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.534913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.535218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.535228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.535571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.535580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 
00:29:10.615 [2024-11-20 16:40:56.535866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.535876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.536161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.536171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.536364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.536373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.536707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.536717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.536999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.537008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 
00:29:10.615 [2024-11-20 16:40:56.537302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.537312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.537623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.537634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.537915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.537925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.538234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.538244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.538452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.538462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 
00:29:10.615 [2024-11-20 16:40:56.538749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.538760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.539118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.539129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.615 [2024-11-20 16:40:56.539433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.615 [2024-11-20 16:40:56.539444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.615 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.539751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.539765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.540110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.540121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.540411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.540420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.540702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.540712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.540917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.540927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.541249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.541260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.541543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.541553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.541903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.541912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.542200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.542210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.542513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.542522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.542778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.542789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.543000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.543011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.543300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.543309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.543617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.543627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.543832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.543842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.544147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.544157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.544462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.544472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.544834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.544845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.545127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.545138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.545443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.545453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.545763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.545773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.546059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.546069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.546399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.546408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.546592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.546602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.546966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.546975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.547299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.547310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.547636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.547646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.547954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.547964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.548299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.548309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.548596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.548606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.548889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.548898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.549205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.549216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.549580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.549590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.549866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.549875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.550161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.550171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.550567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.550577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.550856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.550865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.551074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.551084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.551458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.551468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.551773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.551782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.552035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.552045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 00:29:10.885 [2024-11-20 16:40:56.552385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.885 [2024-11-20 16:40:56.552399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.885 qpair failed and we were unable to recover it. 
00:29:10.885 [2024-11-20 16:40:56.552698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.885 [2024-11-20 16:40:56.552708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:10.885 qpair failed and we were unable to recover it.
00:29:10.885 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" triplet (errno = 111, tqpair=0x8a1010, addr=10.0.0.2, port=4420) repeats verbatim for every intervening reconnect attempt between 16:40:56.552698 and 16:40:56.585212 ...]
00:29:10.887 [2024-11-20 16:40:56.585212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.887 [2024-11-20 16:40:56.585222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:10.887 qpair failed and we were unable to recover it.
00:29:10.887 [2024-11-20 16:40:56.585395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.585412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.585720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.585729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.586021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.586032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.586354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.586364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.586673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.586682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 
00:29:10.887 [2024-11-20 16:40:56.586962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.586971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.587272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.587289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.587620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.587631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.587817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.587828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.588077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.588087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 
00:29:10.887 [2024-11-20 16:40:56.588418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.588427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.588783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.588792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.589112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.589122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.589437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.589446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.589729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.589746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 
00:29:10.887 [2024-11-20 16:40:56.589922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.589932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.590229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.590239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.590544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.590554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.590922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.590932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.591220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.591229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 
00:29:10.887 [2024-11-20 16:40:56.591553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.591566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.591855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.591865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.592037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.592048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.592330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.592340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.592620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.592635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 
00:29:10.887 [2024-11-20 16:40:56.592933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.592942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.593147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.593157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.593495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.593504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.593814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.593823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.594130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.594140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 
00:29:10.887 [2024-11-20 16:40:56.594447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.594457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.594759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.594769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.595085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.595095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.595382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.595392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.595582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.595591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 
00:29:10.887 [2024-11-20 16:40:56.595920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.595930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.596256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.596267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.596573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.596583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.887 [2024-11-20 16:40:56.596890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.887 [2024-11-20 16:40:56.596899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.887 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.597197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.597207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 
00:29:10.888 [2024-11-20 16:40:56.597517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.597526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.597842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.597851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.598026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.598036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.598309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.598319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.598698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.598707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 
00:29:10.888 [2024-11-20 16:40:56.598885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.598894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.599089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.599099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.599323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.599333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.599547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.599557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.599882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.599891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 
00:29:10.888 [2024-11-20 16:40:56.600213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.600223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.600511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.600523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.600816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.600826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.601023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.601033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.601414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.601423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 
00:29:10.888 [2024-11-20 16:40:56.601727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.601737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.602071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.602081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.602368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.602378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.602562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.602573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.602768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.602777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 
00:29:10.888 [2024-11-20 16:40:56.603050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.603060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.603396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.603408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.603712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.603722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.604018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.604028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.604328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.604338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 
00:29:10.888 [2024-11-20 16:40:56.604633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.604642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.604965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.604974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.605335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.605345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.605656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.605665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.605972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.605991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 
00:29:10.888 [2024-11-20 16:40:56.606171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.606181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.606522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.606532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.606879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.606889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.607280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.607293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 00:29:10.888 [2024-11-20 16:40:56.607471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.888 [2024-11-20 16:40:56.607481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.888 qpair failed and we were unable to recover it. 
00:29:10.888 [2024-11-20 16:40:56.607800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED)
00:29:10.888 [2024-11-20 16:40:56.607811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:10.888 qpair failed and we were unable to recover it.
00:29:10.890 [... the same connect()/qpair failure triplet (errno = 111, connection refused by 10.0.0.2:4420, tqpair=0x8a1010) repeats approximately 110 more times between 16:40:56.608 and 16:40:56.641 ...]
00:29:10.890 [2024-11-20 16:40:56.642150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.642160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.642374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.642383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.642689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.642698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.643011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.643020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.643395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.643405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.643705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.643714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.644045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.644054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.644362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.644374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.644659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.644668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.644986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.644996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.645291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.645301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.645499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.645509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.645820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.645831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.646156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.646166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.646472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.646482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.646785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.646794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.647103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.647113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.647419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.647429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.647744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.647754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.648062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.648071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.648279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.648288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.648598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.648607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.648932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.648941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.649110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.649121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.649479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.649489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.649790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.649799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.650013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.650024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.650221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.650231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.650599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.650608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.650912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.650921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.651124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.651134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.651348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.651358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.651663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.651673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.651870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.651879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.652204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.652218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.652536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.652545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.652868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.652877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.653164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.653174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.653465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.653474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.653659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.653668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.653973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.653986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.654291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.654300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.654589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.654598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.654915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.654925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.655212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.655223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.655502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.655513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.655785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.655794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.656078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.656087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.656401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.656412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.656733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.656743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.657083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.657094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.657437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.657446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.657736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.657746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.658060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.658070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.658370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.658380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.890 [2024-11-20 16:40:56.658569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.658578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.658813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.658822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.659201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.659211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.659502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.659517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 00:29:10.890 [2024-11-20 16:40:56.659825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.890 [2024-11-20 16:40:56.659834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.890 qpair failed and we were unable to recover it. 
00:29:10.891 [2024-11-20 16:40:56.660146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.660156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.660469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.660478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.660757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.660766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.661071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.661081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.661420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.661429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 
00:29:10.891 [2024-11-20 16:40:56.661736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.661745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.662029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.662039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.662250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.662260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.662590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.662600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.662897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.662911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 
00:29:10.891 [2024-11-20 16:40:56.663221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.663231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.663422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.663432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.663761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.663771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.664085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.664096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 00:29:10.891 [2024-11-20 16:40:56.664404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.664414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it. 
00:29:10.891 [2024-11-20 16:40:56.664721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.891 [2024-11-20 16:40:56.664732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.891 qpair failed and we were unable to recover it.
[identical connect() / qpair failure pairs (errno = 111, tqpair=0x8a1010, addr=10.0.0.2, port=4420) repeat continuously from 16:40:56.664721 through 16:40:56.698943; repeats elided]
00:29:10.892 [2024-11-20 16:40:56.699227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.699236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 00:29:10.892 [2024-11-20 16:40:56.699549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.699558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 00:29:10.892 [2024-11-20 16:40:56.699727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.699738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 00:29:10.892 [2024-11-20 16:40:56.700001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.700011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 00:29:10.892 [2024-11-20 16:40:56.700315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.700324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 
00:29:10.892 [2024-11-20 16:40:56.700649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.700659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 00:29:10.892 [2024-11-20 16:40:56.700995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.701005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 00:29:10.892 [2024-11-20 16:40:56.701321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.701331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 00:29:10.892 [2024-11-20 16:40:56.701668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.701678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 00:29:10.892 [2024-11-20 16:40:56.701956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.892 [2024-11-20 16:40:56.701966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.892 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.702268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.702278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.702466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.702475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.702817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.702827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.703136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.703146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.703321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.703331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.703617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.703626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.703945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.703954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.704271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.704281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.704564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.704573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.704778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.704788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.705098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.705108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.705417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.705429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.705719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.705728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.706008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.706018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.706319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.706328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.706637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.706647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.706971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.706980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.707374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.707383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.707676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.707685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.707998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.708009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.708357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.708367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.708674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.708683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.708998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.709008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.709323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.709332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.709639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.709648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.709831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.709841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.710208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.710218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.710398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.710408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.710654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.710664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.710967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.710976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.711248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.711257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.711561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.711573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.711942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.711952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.712244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.712255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.712651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.712662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.712963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.712973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.713316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.713325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.713612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.713622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.713916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.713926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.714224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.714235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.714503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.714513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.714816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.714826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.715137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.715146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.715454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.715463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.715747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.715757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.716036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.716046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.716258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.716267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.716593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.716602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.716931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.716940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.717213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.717223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.717507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.717517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.717816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.717826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.718103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.718115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.718326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.718336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.718631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.718640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.718944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.718953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.719132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.719141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.719412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.719421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.719754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.719763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.720052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.720062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.720389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.720398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.720703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.720712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.721043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.721053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.721372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.721381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 00:29:10.893 [2024-11-20 16:40:56.721565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.721576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.893 [2024-11-20 16:40:56.721868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.893 [2024-11-20 16:40:56.721878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.893 qpair failed and we were unable to recover it. 
00:29:10.895 [2024-11-20 16:40:56.757087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.757096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.757411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.757421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.757763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.757773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.758059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.758069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.758374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.758384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 
00:29:10.895 [2024-11-20 16:40:56.758665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.758674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.758953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.758963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.759150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.759159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.759500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.759512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.759814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.759824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 
00:29:10.895 [2024-11-20 16:40:56.760124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.760135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.760446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.760456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.760808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.760819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.761121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.761131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.761409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.761419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 
00:29:10.895 [2024-11-20 16:40:56.761600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.761609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.761810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.761819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.762134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.762144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.762446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.762456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.762759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.762768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 
00:29:10.895 [2024-11-20 16:40:56.763093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.763103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.763382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.763393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.763700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.763710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.764018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.764028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.764318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.764328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 
00:29:10.895 [2024-11-20 16:40:56.764631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.764641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.764935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.764945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.765218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.765229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.765508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.765519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.765817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.765827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 
00:29:10.895 [2024-11-20 16:40:56.766106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.766117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.766418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.766428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.766764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.895 [2024-11-20 16:40:56.766775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.895 qpair failed and we were unable to recover it. 00:29:10.895 [2024-11-20 16:40:56.767079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.767088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.767424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.767433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.767731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.767740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.768015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.768024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.768311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.768320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.768615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.768625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.768928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.768938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.769245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.769254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.769561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.769570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.769877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.769886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.770206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.770216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.770525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.770534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.770836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.770845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.771128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.771137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.771453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.771462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.771742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.771753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.772115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.772125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.772405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.772415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.772685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.772695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.772995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.773005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.773281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.773290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.773629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.773639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.773945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.773954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.774238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.774248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.774548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.774557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.774833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.774843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.775031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.775041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.775315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.775324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.775634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.775644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.775932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.775942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.776115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.776125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.776388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.776398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.776706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.776716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.776995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.777006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.777270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.777279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.777587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.777598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.777898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.777907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.778184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.778194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.778579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.778588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.778914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.778924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.779224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.779234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 00:29:10.896 [2024-11-20 16:40:56.779638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.779647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it. 
00:29:10.896 [2024-11-20 16:40:56.779937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.896 [2024-11-20 16:40:56.779948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.896 qpair failed and we were unable to recover it.
00:29:10.896 [... the same error pair — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), then nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x8a1010 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." — repeats identically, apart from timestamps, through 16:40:56.813 ...]
00:29:10.898 [2024-11-20 16:40:56.813583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.813592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.813904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.813914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.814227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.814238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.814462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.814473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.814788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.814798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 
00:29:10.898 [2024-11-20 16:40:56.815003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.815013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.815368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.815377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.815666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.815677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.815957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.815966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.816256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.816266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 
00:29:10.898 [2024-11-20 16:40:56.816453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.816463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.816777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.816787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.817106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.817116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.817434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.817444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.817785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.817795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 
00:29:10.898 [2024-11-20 16:40:56.818112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.818122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.818430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.818441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.818751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.818760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.819054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.819064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.819401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.819410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 
00:29:10.898 [2024-11-20 16:40:56.819713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.819722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.820021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.820031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.820348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.820360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.820566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.820575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.820881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.820890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 
00:29:10.898 [2024-11-20 16:40:56.821206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.821215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.821502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.821511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.821726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.821736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.822125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.822134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.822321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.822331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 
00:29:10.898 [2024-11-20 16:40:56.822667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.822677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.822986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.822996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.823313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.823323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.823517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.823526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.823847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.823856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 
00:29:10.898 [2024-11-20 16:40:56.824169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.824179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898
Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Write completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 Read completed with error (sct=0, sc=8) 00:29:10.898 starting I/O failed 00:29:10.898 [2024-11-20 16:40:56.824388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.898 [2024-11-20 16:40:56.824734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.824749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.824950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.824958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it.
00:29:10.898 [2024-11-20 16:40:56.825368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.825396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.825715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.825724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.826196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.826225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.826549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.826558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.898 [2024-11-20 16:40:56.826727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.826739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 
00:29:10.898 [2024-11-20 16:40:56.826921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.898 [2024-11-20 16:40:56.826928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.898 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.827125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.827133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.827467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.827475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.827793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.827801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.827979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.827990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 
00:29:10.899 [2024-11-20 16:40:56.828341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.828349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.828674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.828682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.828860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.828867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.829200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.829208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.829520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.829528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 
00:29:10.899 [2024-11-20 16:40:56.829846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.829853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.830044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.830052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.830433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.830440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.830747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.830755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.830947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.830954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 
00:29:10.899 [2024-11-20 16:40:56.831259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.831267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.831536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.831544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.831603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.831610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.831888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.831895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.832093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.832100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 
00:29:10.899 [2024-11-20 16:40:56.832283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.832290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.832603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.832610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:10.899 [2024-11-20 16:40:56.832934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.899 [2024-11-20 16:40:56.832941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:10.899 qpair failed and we were unable to recover it. 00:29:11.173 [2024-11-20 16:40:56.833225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.173 [2024-11-20 16:40:56.833234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.173 qpair failed and we were unable to recover it. 00:29:11.173 [2024-11-20 16:40:56.833548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.173 [2024-11-20 16:40:56.833557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.173 qpair failed and we were unable to recover it. 
00:29:11.173 [2024-11-20 16:40:56.833901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.173 [2024-11-20 16:40:56.833908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:11.173 qpair failed and we were unable to recover it.
[The three-line sequence above — connect() refused with errno = 111 (ECONNREFUSED), the nvme_tcp sock connection error for tqpair=0x7fe984000b90 at 10.0.0.2:4420, and the unrecoverable-qpair message — repeats continuously with only the timestamps advancing, from 16:40:56.833901 through 16:40:56.866803.]
00:29:11.176 [2024-11-20 16:40:56.867105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.867112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.867393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.867407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.867595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.867602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.867790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.867797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.868074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.868081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 
00:29:11.176 [2024-11-20 16:40:56.868236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.868242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.868686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.868694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.868980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.868990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.869270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.869276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.869570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.869577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 
00:29:11.176 [2024-11-20 16:40:56.869893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.869899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.870086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.870094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.870415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.870422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.870714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.870721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.871038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.871045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 
00:29:11.176 [2024-11-20 16:40:56.871239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.871246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.871576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.871584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.871777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.871784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.872081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.176 [2024-11-20 16:40:56.872087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.176 qpair failed and we were unable to recover it. 00:29:11.176 [2024-11-20 16:40:56.872375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.872382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 
00:29:11.177 [2024-11-20 16:40:56.872705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.872712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.873012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.873019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.873342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.873348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.873642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.873649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.873966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.873972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 
00:29:11.177 [2024-11-20 16:40:56.874271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.874278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.874591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.874598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.874882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.874895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.875200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.875206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.875510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.875516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 
00:29:11.177 [2024-11-20 16:40:56.875828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.875834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.876141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.876148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.876458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.876465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.876759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.876766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.876972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.876979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 
00:29:11.177 [2024-11-20 16:40:56.877304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.877311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.877635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.877641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.877966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.877973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.878258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.878265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.878551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.878558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 
00:29:11.177 [2024-11-20 16:40:56.878751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.878758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.879078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.879085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.879269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.879276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.879571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.879578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.879868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.879875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 
00:29:11.177 [2024-11-20 16:40:56.880163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.880169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.880458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.880467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.880755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.880761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.881073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.881080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.881260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.881267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 
00:29:11.177 [2024-11-20 16:40:56.881538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.881545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.177 [2024-11-20 16:40:56.881864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.177 [2024-11-20 16:40:56.881871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.177 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.882071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.882078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.882454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.882461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.882668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.882676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 
00:29:11.178 [2024-11-20 16:40:56.882975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.882986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.883266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.883272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.883645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.883652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.883962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.883968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.884278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.884285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 
00:29:11.178 [2024-11-20 16:40:56.884600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.884607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.884893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.884901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.885208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.885216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.885513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.885519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.885824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.885831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 
00:29:11.178 [2024-11-20 16:40:56.886141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.886148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.886519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.886525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.886886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.886894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.887031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.887038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.887334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.887340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 
00:29:11.178 [2024-11-20 16:40:56.887732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.887739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.888030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.888037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.888324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.888330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.888652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.888659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 00:29:11.178 [2024-11-20 16:40:56.888967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.178 [2024-11-20 16:40:56.888974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.178 qpair failed and we were unable to recover it. 
00:29:11.178 [2024-11-20 16:40:56.889168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.178 [2024-11-20 16:40:56.889175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:11.178 qpair failed and we were unable to recover it.
00:29:11.181 [... the error triplet above (connect() failed, errno = 111 / sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, with only timestamps advancing, from 16:40:56.889 through 16:40:56.921 ...]
00:29:11.181 [2024-11-20 16:40:56.921917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.181 [2024-11-20 16:40:56.921924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.181 qpair failed and we were unable to recover it. 00:29:11.181 [2024-11-20 16:40:56.921994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.181 [2024-11-20 16:40:56.922000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.181 qpair failed and we were unable to recover it. 00:29:11.181 [2024-11-20 16:40:56.922302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.181 [2024-11-20 16:40:56.922309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.181 qpair failed and we were unable to recover it. 00:29:11.181 [2024-11-20 16:40:56.922619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.181 [2024-11-20 16:40:56.922626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.181 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.922815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.922821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 
00:29:11.182 [2024-11-20 16:40:56.923151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.923158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.923474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.923481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.923778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.923785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.923979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.923993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.924186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.924193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 
00:29:11.182 [2024-11-20 16:40:56.924550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.924556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.924866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.924872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.925176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.925183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.925406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.925413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.925777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.925784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 
00:29:11.182 [2024-11-20 16:40:56.926103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.926110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.926440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.926449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.926762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.926770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.927101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.927109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.927395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.927402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 
00:29:11.182 [2024-11-20 16:40:56.927737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.927745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.928048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.928055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.928362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.928369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.928765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.928771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.929086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.929093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 
00:29:11.182 [2024-11-20 16:40:56.929392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.929398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.929677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.929684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.929994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.930001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.930292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.930299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.930614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.930621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 
00:29:11.182 [2024-11-20 16:40:56.930796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.930803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.931146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.931153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.931362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.931368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.931658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.931671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 00:29:11.182 [2024-11-20 16:40:56.931954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.182 [2024-11-20 16:40:56.931961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.182 qpair failed and we were unable to recover it. 
00:29:11.182 [2024-11-20 16:40:56.932275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.932283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.932606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.932613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.932917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.932924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.933102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.933109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.933291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.933298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 
00:29:11.183 [2024-11-20 16:40:56.933638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.933645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.933941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.933949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.934257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.934264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.934534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.934541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.934899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.934906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 
00:29:11.183 [2024-11-20 16:40:56.935214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.935220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.935500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.935507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.935701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.935708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.935888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.935896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.936211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.936220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 
00:29:11.183 [2024-11-20 16:40:56.936506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.936513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.936825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.936833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.937033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.937040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.937317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.937324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.937640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.937647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 
00:29:11.183 [2024-11-20 16:40:56.937954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.937961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.938252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.938261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.938579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.938586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.938907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.938914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.939196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.939203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 
00:29:11.183 [2024-11-20 16:40:56.939518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.939526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.939833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.939840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.940147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.940154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.940491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.940498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 00:29:11.183 [2024-11-20 16:40:56.940792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.183 [2024-11-20 16:40:56.940799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.183 qpair failed and we were unable to recover it. 
00:29:11.183 [2024-11-20 16:40:56.941101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.941108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.941409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.941415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.941721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.941728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.942052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.942059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.942357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.942364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 
00:29:11.184 [2024-11-20 16:40:56.942665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.942671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.942978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.942989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.943141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.943154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.943463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.943470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.943759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.943766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 
00:29:11.184 [2024-11-20 16:40:56.944081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.944088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.944413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.944420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.944728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.944735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.944908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.944915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 00:29:11.184 [2024-11-20 16:40:56.945143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.184 [2024-11-20 16:40:56.945150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.184 qpair failed and we were unable to recover it. 
00:29:11.187 [2024-11-20 16:40:56.978220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.978226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 00:29:11.187 [2024-11-20 16:40:56.978534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.978541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 00:29:11.187 [2024-11-20 16:40:56.978861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.978868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 00:29:11.187 [2024-11-20 16:40:56.979072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.979079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 00:29:11.187 [2024-11-20 16:40:56.979395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.979401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 
00:29:11.187 [2024-11-20 16:40:56.979587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.979595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 00:29:11.187 [2024-11-20 16:40:56.979900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.979907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 00:29:11.187 [2024-11-20 16:40:56.980189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.980196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 00:29:11.187 [2024-11-20 16:40:56.980549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.187 [2024-11-20 16:40:56.980557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.187 qpair failed and we were unable to recover it. 00:29:11.187 [2024-11-20 16:40:56.980850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.980858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 
00:29:11.188 [2024-11-20 16:40:56.981166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.981173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.981481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.981488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.981780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.981786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.982098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.982106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.982422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.982429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 
00:29:11.188 [2024-11-20 16:40:56.982745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.982751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.983049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.983055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.983367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.983373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.983579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.983585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.983867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.983874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 
00:29:11.188 [2024-11-20 16:40:56.984195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.984201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.984510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.984516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.984812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.984818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.985106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.985113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.985438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.985445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 
00:29:11.188 [2024-11-20 16:40:56.985754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.985761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.986055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.986063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.986372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.986379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.986666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.986673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.986849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.986857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 
00:29:11.188 [2024-11-20 16:40:56.987128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.987135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.987296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.987303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.987518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.987524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.987828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.987835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.988144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.988150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 
00:29:11.188 [2024-11-20 16:40:56.988313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.988320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.988533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.988540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.988841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.988847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.989137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.989144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 00:29:11.188 [2024-11-20 16:40:56.989455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.188 [2024-11-20 16:40:56.989462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.188 qpair failed and we were unable to recover it. 
00:29:11.189 [2024-11-20 16:40:56.989759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.989766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.990076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.990083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.990364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.990371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.990700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.990707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.991022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.991030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 
00:29:11.189 [2024-11-20 16:40:56.991342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.991349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.991646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.991653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.991858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.991865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.992201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.992208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.992505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.992511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 
00:29:11.189 [2024-11-20 16:40:56.992801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.992809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.993117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.993124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.993414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.993421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.993736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.993742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.993907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.993914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 
00:29:11.189 [2024-11-20 16:40:56.994265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.994271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.994581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.994588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.994918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.994924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.995228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.995234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.995544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.995550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 
00:29:11.189 [2024-11-20 16:40:56.995857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.995864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.996178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.996185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.996475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.996483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.996781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.996788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.997095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.997102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 
00:29:11.189 [2024-11-20 16:40:56.997390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.997397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.997717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.997724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.998026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.998033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.998328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.998335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.998528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.998535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 
00:29:11.189 [2024-11-20 16:40:56.998873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.998880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.999194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.999209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.999492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.999498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:56.999779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:56.999794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 00:29:11.189 [2024-11-20 16:40:57.000112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:57.000119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it. 
00:29:11.189 [2024-11-20 16:40:57.000428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.189 [2024-11-20 16:40:57.000434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.189 qpair failed and we were unable to recover it.
[The same three-message sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously with only the timestamps changing, from 16:40:57.000640 through 16:40:57.034019 (console time 00:29:11.189–00:29:11.193).]
00:29:11.193 [2024-11-20 16:40:57.034359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.034366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.034668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.034675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.034988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.034995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.035352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.035359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.035560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.035567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 
00:29:11.193 [2024-11-20 16:40:57.035835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.035842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.036234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.036241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.036533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.036539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.036839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.036845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.037166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.037172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 
00:29:11.193 [2024-11-20 16:40:57.037479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.037486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.037690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.037696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.038010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.038017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.038356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.038363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.038553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.038560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 
00:29:11.193 [2024-11-20 16:40:57.038826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.038833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.039135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.039143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.039329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.039336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.039666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.039674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.039827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.039836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 
00:29:11.193 [2024-11-20 16:40:57.040140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.040147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.040512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.040519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.040813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.040820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.041127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.041136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.041227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.041234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 
00:29:11.193 [2024-11-20 16:40:57.041515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.041522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.041691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.041698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.041880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.041888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.042135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.042142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.042340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.042347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 
00:29:11.193 [2024-11-20 16:40:57.042530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.042537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.042837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.193 [2024-11-20 16:40:57.042843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.193 qpair failed and we were unable to recover it. 00:29:11.193 [2024-11-20 16:40:57.043096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.043104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.043330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.043337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.043618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.043625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 
00:29:11.194 [2024-11-20 16:40:57.043828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.043835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.044150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.044157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.044453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.044460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.044765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.044772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.045046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.045053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 
00:29:11.194 [2024-11-20 16:40:57.045344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.045350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.045489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.045496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.045790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.045796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.046133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.046140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.046408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.046415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 
00:29:11.194 [2024-11-20 16:40:57.046749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.046756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.047058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.047065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.047278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.047285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.047472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.047479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.047772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.047779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 
00:29:11.194 [2024-11-20 16:40:57.048091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.048098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.048291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.048298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.048550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.048556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.048865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.048872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.049153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.049160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 
00:29:11.194 [2024-11-20 16:40:57.049517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.049524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.049799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.049806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.050132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.050139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.050441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.050448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.050774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.050781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 
00:29:11.194 [2024-11-20 16:40:57.050994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.051001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.051318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.051326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.051614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.051622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.051950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.051959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.052254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.052261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 
00:29:11.194 [2024-11-20 16:40:57.052544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.052551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.052873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.052880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.053262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.053269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.194 qpair failed and we were unable to recover it. 00:29:11.194 [2024-11-20 16:40:57.053571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.194 [2024-11-20 16:40:57.053578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.053876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.053882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 
00:29:11.195 [2024-11-20 16:40:57.054199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.054206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.054534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.054542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.054849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.054855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.055154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.055162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.055487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.055493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 
00:29:11.195 [2024-11-20 16:40:57.055778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.055786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.055987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.055995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.056160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.056168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.056382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.056389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 00:29:11.195 [2024-11-20 16:40:57.056624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.195 [2024-11-20 16:40:57.056631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.195 qpair failed and we were unable to recover it. 
00:29:11.195 [... the same error pair — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 16:40:57.056920 through 16:40:57.088542; repeated entries elided ...]
00:29:11.198 [2024-11-20 16:40:57.088917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.088924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.089226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.089234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.089530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.089538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.089926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.089935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.090103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.090111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 
00:29:11.198 [2024-11-20 16:40:57.090420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.090428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.090777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.090785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.091099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.091106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.091410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.091417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.091688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.091694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 
00:29:11.198 [2024-11-20 16:40:57.092034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.092041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.092254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.092260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.092412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.092419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.092640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.092647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.093048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.093056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 
00:29:11.198 [2024-11-20 16:40:57.093373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.093380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.093608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.093614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.093904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.198 [2024-11-20 16:40:57.093911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.198 qpair failed and we were unable to recover it. 00:29:11.198 [2024-11-20 16:40:57.094118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.094125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.094303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.094310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 
00:29:11.199 [2024-11-20 16:40:57.094617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.094624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.094938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.094945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.095158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.095165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.095495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.095502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.095677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.095684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 
00:29:11.199 [2024-11-20 16:40:57.095990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.095997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.096342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.096349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.096676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.096682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.096885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.096892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.097095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.097103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 
00:29:11.199 [2024-11-20 16:40:57.097437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.097444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.097733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.097740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.098050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.098057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.098436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.098445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.098739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.098746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 
00:29:11.199 [2024-11-20 16:40:57.098918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.098925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.099233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.099240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.099559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.099567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.099913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.099921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.100214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.100221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 
00:29:11.199 [2024-11-20 16:40:57.100505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.100512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.100792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.100799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.101108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.101115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.101329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.101339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.101665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.101671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 
00:29:11.199 [2024-11-20 16:40:57.101964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.101972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.102276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.102284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.102584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.102592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.102797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.102804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.103011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.103018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 
00:29:11.199 [2024-11-20 16:40:57.103293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.103300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.103604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.103610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.103897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.103903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.104114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.104122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.104434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.104440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 
00:29:11.199 [2024-11-20 16:40:57.104722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.199 [2024-11-20 16:40:57.104729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.199 qpair failed and we were unable to recover it. 00:29:11.199 [2024-11-20 16:40:57.105043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.105050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.105348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.105356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.105649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.105656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.105928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.105935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 
00:29:11.200 [2024-11-20 16:40:57.106091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.106100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.106406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.106413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.106722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.106728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.106929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.106936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.107213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.107220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 
00:29:11.200 [2024-11-20 16:40:57.107514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.107521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.107868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.107875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.108060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.108067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.108342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.108348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.108644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.108652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 
00:29:11.200 [2024-11-20 16:40:57.108854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.108862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.109152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.109159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.109455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.109470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.109837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.109845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 00:29:11.200 [2024-11-20 16:40:57.110053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.200 [2024-11-20 16:40:57.110060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.200 qpair failed and we were unable to recover it. 
00:29:11.200 [2024-11-20 16:40:57.110384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.200 [2024-11-20 16:40:57.110392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:11.200 qpair failed and we were unable to recover it.
[identical connect()/qpair-recovery error triple (errno = 111, tqpair=0x7fe984000b90, addr=10.0.0.2, port=4420) repeated for each retry, timestamps 16:40:57.110694 through 16:40:57.142867]
00:29:11.478 [2024-11-20 16:40:57.143155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.143163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.143331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.143339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.143623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.143631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.143834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.143842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.144175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.144182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 
00:29:11.478 [2024-11-20 16:40:57.144473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.144487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.144671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.144678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.144995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.145003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.145391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.145398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.145700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.145707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 
00:29:11.478 [2024-11-20 16:40:57.145869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.145877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.146126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.146133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.146442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.146449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.146742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.478 [2024-11-20 16:40:57.146758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.478 qpair failed and we were unable to recover it. 00:29:11.478 [2024-11-20 16:40:57.147048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.147055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.147349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.147356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.147644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.147651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.147855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.147862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.148206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.148214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.148520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.148527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.148837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.148844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.149160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.149167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.149535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.149543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.149845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.149852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.150152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.150159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.150471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.150478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.150774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.150780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.151099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.151106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.151383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.151390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.151706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.151713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.151999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.152008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.152284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.152291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.152584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.152591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.152777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.152784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.153086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.153093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.153287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.153293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.153644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.153651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.153957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.153964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.154332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.154339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.154617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.154624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.154924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.154932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.155209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.155217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.155507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.155515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.155668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.155677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.155988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.155995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.156282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.156290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.156589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.156597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.156902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.156909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.157206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.157213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.157513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.157528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.157827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.157834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.158145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.158152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.158311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.158318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.158584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.158591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.158862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.158869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 
00:29:11.479 [2024-11-20 16:40:57.159192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.159201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.479 [2024-11-20 16:40:57.159501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.479 [2024-11-20 16:40:57.159508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.479 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.159747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.159753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.160094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.160102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.160417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.160424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 
00:29:11.480 [2024-11-20 16:40:57.160747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.160754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.161067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.161074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.161292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.161300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.161564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.161571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.161861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.161869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 
00:29:11.480 [2024-11-20 16:40:57.162152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.162159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.162537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.162544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.162857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.162864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.163184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.163191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.163494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.163502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 
00:29:11.480 [2024-11-20 16:40:57.163824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.163833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.164146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.164153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.164475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.164482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.164788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.164795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 00:29:11.480 [2024-11-20 16:40:57.165120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.480 [2024-11-20 16:40:57.165127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.480 qpair failed and we were unable to recover it. 
00:29:11.480 [2024-11-20 16:40:57.165440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.480 [2024-11-20 16:40:57.165448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:11.480 qpair failed and we were unable to recover it.
00:29:11.483 [... repeated identical connect() failures (errno = 111) against tqpair=0x7fe984000b90, addr=10.0.0.2, port=4420 from 16:40:57.165 through 16:40:57.197 elided; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:29:11.483 [2024-11-20 16:40:57.198152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.198160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.198462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.198475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.198762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.198768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.199049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.199058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.199367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.199374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 
00:29:11.483 [2024-11-20 16:40:57.199688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.199695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.200030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.200038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.200347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.200355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.200508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.200516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.200920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.200927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 
00:29:11.483 [2024-11-20 16:40:57.201261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.201269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.201577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.201584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.201795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.201801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.201999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.202007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.202274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.202281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 
00:29:11.483 [2024-11-20 16:40:57.202440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.202448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.202773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.202781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.202984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.202991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.203324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.203331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.203650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.203657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 
00:29:11.483 [2024-11-20 16:40:57.203966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.203972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.204341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.204348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.204640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.204647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.204955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.204962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.205259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.205266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 
00:29:11.483 [2024-11-20 16:40:57.205584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.205590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.205755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.205763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.206125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.206133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.206436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.206443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.206637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.206644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 
00:29:11.483 [2024-11-20 16:40:57.206948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.206956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.207234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.207241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.207394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.207402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.207620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.207627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.207932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.207940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 
00:29:11.483 [2024-11-20 16:40:57.208127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.208134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.208392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.208399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.208727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.208735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.209055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.209062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.209322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.209329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 
00:29:11.483 [2024-11-20 16:40:57.209657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.483 [2024-11-20 16:40:57.209665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.483 qpair failed and we were unable to recover it. 00:29:11.483 [2024-11-20 16:40:57.209846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.209853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.210183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.210190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.210414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.210421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.210587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.210595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 
00:29:11.484 [2024-11-20 16:40:57.210863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.210870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.211170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.211180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.211489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.211497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.211786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.211794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.212104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.212111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 
00:29:11.484 [2024-11-20 16:40:57.212444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.212450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.212754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.212760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.213057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.213064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.213384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.213391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.213728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.213735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 
00:29:11.484 [2024-11-20 16:40:57.214075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.214083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.214365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.214372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.214660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.214667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.214843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.214850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.215121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.215128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 
00:29:11.484 [2024-11-20 16:40:57.215559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.215566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.215856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.215864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.216162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.216170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.216454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.216461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.216778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.216785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 
00:29:11.484 [2024-11-20 16:40:57.217097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.217105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.217288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.217295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.217552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.217559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.217926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.217932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.218140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.218147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 
00:29:11.484 [2024-11-20 16:40:57.218550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.218557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.218854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.218862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.219211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.219218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.219568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.219575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 00:29:11.484 [2024-11-20 16:40:57.219889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.219896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 
00:29:11.484 [2024-11-20 16:40:57.220167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.484 [2024-11-20 16:40:57.220174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.484 qpair failed and we were unable to recover it. 
00:29:11.487 [2024-11-20 16:40:57.254905] last message repeated ~115 times through 16:40:57.254905: connect() to 10.0.0.2 port 4420 kept failing with errno = 111 (connection refused) and tqpair=0x7fe984000b90 could not be recovered. 
00:29:11.487 [2024-11-20 16:40:57.255215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.255222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.255441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.255447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.255775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.255782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.255998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.256005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.256293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.256300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 
00:29:11.487 [2024-11-20 16:40:57.256606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.256613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.256905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.256912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.257222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.257230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.257595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.257601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.257878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.257885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 
00:29:11.487 [2024-11-20 16:40:57.258202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.258211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.258600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.258607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.258926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.258933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.259246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.259254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.259578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.259585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 
00:29:11.487 [2024-11-20 16:40:57.259896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.259902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.260098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.260105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.260397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.260404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.260733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.260739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.261026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.261034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 
00:29:11.487 [2024-11-20 16:40:57.261210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.261217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.261523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.261529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.261807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.261821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.262138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.262144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.262394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.262402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 
00:29:11.487 [2024-11-20 16:40:57.262731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.262738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.263021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.263028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.263361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.263368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.263670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.263676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.263835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.263843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 
00:29:11.487 [2024-11-20 16:40:57.264138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.487 [2024-11-20 16:40:57.264145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.487 qpair failed and we were unable to recover it. 00:29:11.487 [2024-11-20 16:40:57.264460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.264466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.264756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.264762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.265062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.265069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.265401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.265408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.265800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.265807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.266012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.266019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.266338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.266345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.266706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.266712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.266889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.266896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.267247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.267254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.267581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.267589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.267928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.267935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.268173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.268180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.268523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.268530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.268841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.268849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.269137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.269144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.269368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.269375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.269670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.269677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.269868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.269875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.270163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.270172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.270511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.270517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.270800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.270813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.271114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.271121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.271429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.271436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.271795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.271802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.272096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.272103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.272470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.272477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.272768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.272775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.273097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.273104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.273265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.273273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.273546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.273554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.273718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.273726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.274045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.274052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.274352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.274359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.274628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.274634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.274838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.274845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.275139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.275146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.275454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.275460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.275785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.275792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.276118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.276125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.276419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.276431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.276761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.276767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.276972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.276978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.488 [2024-11-20 16:40:57.277308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.277315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 
00:29:11.488 [2024-11-20 16:40:57.277652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.488 [2024-11-20 16:40:57.277658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.488 qpair failed and we were unable to recover it. 00:29:11.491 [last message repeated through 16:40:57.311126: connect() failed, errno = 111; sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.]
00:29:11.491 [2024-11-20 16:40:57.311440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.311447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.311743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.311751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.312060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.312068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.312383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.312391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.312684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.312692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 
00:29:11.491 [2024-11-20 16:40:57.313005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.313013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.313242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.313251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.313583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.313592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.313767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.313774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.314091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.314099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 
00:29:11.491 [2024-11-20 16:40:57.314434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.314442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.314752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.314760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.314928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.314936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.315227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.315234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.315538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.315545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 
00:29:11.491 [2024-11-20 16:40:57.315865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.315871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.316229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.316236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.316543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.316550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.316871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.316878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.317075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.317083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 
00:29:11.491 [2024-11-20 16:40:57.317270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.317278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.317590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.317597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.317849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.317856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.318151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.318158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.318416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.318422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 
00:29:11.491 [2024-11-20 16:40:57.318760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.318767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.319105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.319113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.319419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.319426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.319721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.319728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.320033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.320040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 
00:29:11.491 [2024-11-20 16:40:57.320214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.320221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.320426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.320432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.320633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.320640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.320960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.320967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.321253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.321261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 
00:29:11.491 [2024-11-20 16:40:57.321560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.321567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.321867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.321874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.322255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.322262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.322412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.491 [2024-11-20 16:40:57.322419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.491 qpair failed and we were unable to recover it. 00:29:11.491 [2024-11-20 16:40:57.322744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.322750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 
00:29:11.492 [2024-11-20 16:40:57.323047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.323054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.323367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.323374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.323680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.323687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.323898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.323905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.324181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.324188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 
00:29:11.492 [2024-11-20 16:40:57.324476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.324491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.324675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.324682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.324969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.324977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.325157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.325200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.325424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.325432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 
00:29:11.492 [2024-11-20 16:40:57.325741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.325748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.326038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.326046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.326273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.326280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.326590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.326597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.326890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.326897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 
00:29:11.492 [2024-11-20 16:40:57.327186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.327193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.327485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.327493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.327803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.327810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.327999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.328007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.328300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.328306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 
00:29:11.492 [2024-11-20 16:40:57.328595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.328602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.328961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.328968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.329271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.329279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.329650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.329657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.329952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.329960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 
00:29:11.492 [2024-11-20 16:40:57.330145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.330153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.330229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.330236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.330521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.330528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.330825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.330831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.331146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.331153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 
00:29:11.492 [2024-11-20 16:40:57.331466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.331473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.331764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.331771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.332101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.332109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.332409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.332416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 00:29:11.492 [2024-11-20 16:40:57.332721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.492 [2024-11-20 16:40:57.332728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.492 qpair failed and we were unable to recover it. 
00:29:11.492 [2024-11-20 16:40:57.332926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.492 [2024-11-20 16:40:57.332933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:11.492 qpair failed and we were unable to recover it.
00:29:11.492 [... the same connect()/qpair-failure triplet (errno = 111, tqpair=0x7fe984000b90, addr=10.0.0.2, port=4420) repeats from 16:40:57.333238 through 16:40:57.365358 ...]
00:29:11.494 [2024-11-20 16:40:57.365644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.365651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 00:29:11.494 [2024-11-20 16:40:57.366010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.366017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 00:29:11.494 [2024-11-20 16:40:57.366308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.366315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 00:29:11.494 [2024-11-20 16:40:57.366607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.366614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 00:29:11.494 [2024-11-20 16:40:57.366771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.366779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 
00:29:11.494 [2024-11-20 16:40:57.367055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.367062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 00:29:11.494 [2024-11-20 16:40:57.367387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.367395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 00:29:11.494 [2024-11-20 16:40:57.367713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.367719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 00:29:11.494 [2024-11-20 16:40:57.367999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.494 [2024-11-20 16:40:57.368006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.494 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.368288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.368294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.368541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.368548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.368834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.368841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.369152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.369160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.369335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.369343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.369638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.369645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.369935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.369942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.370264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.370271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.370546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.370554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.370854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.370861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.371021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.371028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.371379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.371386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.371689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.371696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.371890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.371897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.372169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.372175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.372485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.372491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.372796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.372804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.373128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.373135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.373441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.373448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.373767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.373774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.374064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.374071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.374379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.374386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.374578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.374585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.374892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.374900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.375196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.375203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.375511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.375518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.375710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.375718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.376036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.376042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.376358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.376365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.376539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.376547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.376851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.376857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.377150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.377157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.377482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.377489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.377649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.377656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.377906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.377913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.378231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.378238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.378524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.378530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.378846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.378852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.379156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.379163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.379466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.379473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.379832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.379840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.380151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.380158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.380454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.380461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.380732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.380738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.381059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.381066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.381319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.381326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.381567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.381575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.381785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.381792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.382101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.382110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.382403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.382417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.382598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.382604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.382913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.382920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.383227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.383234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.383548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.383555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.383876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.383884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 00:29:11.495 [2024-11-20 16:40:57.384175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.384182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.495 qpair failed and we were unable to recover it. 
00:29:11.495 [2024-11-20 16:40:57.384470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.495 [2024-11-20 16:40:57.384484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-11-20 16:40:57.384670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.384678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-11-20 16:40:57.384949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.384957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-11-20 16:40:57.385265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.385273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-11-20 16:40:57.385573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.385581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 
00:29:11.496 [2024-11-20 16:40:57.385884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.385891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-11-20 16:40:57.386200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.386209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-11-20 16:40:57.386481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.386489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-11-20 16:40:57.386817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.386824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 00:29:11.496 [2024-11-20 16:40:57.387127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.387135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 
00:29:11.496 [2024-11-20 16:40:57.387469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.496 [2024-11-20 16:40:57.387477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.496 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-11-20 16:40:57.420859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.420866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.421176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.421184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.421502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.421509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.421820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.421827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.422177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.422184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-11-20 16:40:57.422493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.422500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.422812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.422819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.423129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.423137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.423439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.423446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.423741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.423748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-11-20 16:40:57.424065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.424072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.424240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.424247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.424607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.424614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.424948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.424955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.425318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.425324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-11-20 16:40:57.425619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.425626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.425927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.425934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.426238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.426246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.426555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.426562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.426855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.426862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-11-20 16:40:57.427154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.427162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.427460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.427472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.427774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.427782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.428065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.428073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.428378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.428385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 
00:29:11.773 [2024-11-20 16:40:57.428676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.428683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.428889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.773 [2024-11-20 16:40:57.428896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.773 qpair failed and we were unable to recover it. 00:29:11.773 [2024-11-20 16:40:57.429056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.429065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.429345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.429352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.429683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.429690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-11-20 16:40:57.429887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.429894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.430079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.430087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.430416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.430423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.430706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.430713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.431060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.431068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-11-20 16:40:57.431242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.431248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.431545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.431553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.431712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.431720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.431995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.432002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.432274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.432281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-11-20 16:40:57.432451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.432457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.432742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.432749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.432940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.432947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.433132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.433139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.433510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.433517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-11-20 16:40:57.433815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.433822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.434149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.434156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.434535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.434542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.434838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.434844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.435159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.435166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-11-20 16:40:57.435428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.435434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.435767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.435774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.436084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.436092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.436269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.436276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.436588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.436595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-11-20 16:40:57.436907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.436914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.437224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.437231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.437524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.437531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.437723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.437731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.437926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.437933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.774 [2024-11-20 16:40:57.438240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.438247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.438622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.438628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.438776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.438783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.438970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.438977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 00:29:11.774 [2024-11-20 16:40:57.439276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.774 [2024-11-20 16:40:57.439283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.774 qpair failed and we were unable to recover it. 
00:29:11.775 [2024-11-20 16:40:57.439531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.439538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-11-20 16:40:57.439842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.439849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-11-20 16:40:57.440175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.440182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-11-20 16:40:57.440498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.440505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-11-20 16:40:57.440792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.440798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 
00:29:11.775 [2024-11-20 16:40:57.441111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.441118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-11-20 16:40:57.441437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.441445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-11-20 16:40:57.441752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.441760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-11-20 16:40:57.442051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.442058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 00:29:11.775 [2024-11-20 16:40:57.442360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.775 [2024-11-20 16:40:57.442368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.775 qpair failed and we were unable to recover it. 
00:29:11.775 [2024-11-20 16:40:57.442693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.775 [2024-11-20 16:40:57.442701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:11.775 qpair failed and we were unable to recover it.
[... the three lines above repeat ~115 times between 16:40:57.442693 and 16:40:57.476404 with identical tqpair=0x7fe984000b90, addr=10.0.0.2, port=4420; only the timestamps differ (errno 111 = ECONNREFUSED on every attempt) ...]
00:29:11.778 [2024-11-20 16:40:57.476397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.778 [2024-11-20 16:40:57.476404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:11.778 qpair failed and we were unable to recover it.
00:29:11.778 [2024-11-20 16:40:57.476707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.476715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.477021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.477029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.477383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.477389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.477667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.477683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.477960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.477967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 
00:29:11.778 [2024-11-20 16:40:57.478254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.478261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.478562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.478569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.478755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.478762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.479100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.479107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.479412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.479418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 
00:29:11.778 [2024-11-20 16:40:57.479738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.479745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.480044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.480051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.480369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.480376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.480675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.480682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.481039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.481046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 
00:29:11.778 [2024-11-20 16:40:57.481319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.481326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.481645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.481652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.481922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.481928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.482137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.482145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 00:29:11.778 [2024-11-20 16:40:57.482452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.482458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.778 qpair failed and we were unable to recover it. 
00:29:11.778 [2024-11-20 16:40:57.482747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.778 [2024-11-20 16:40:57.482754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.483044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.483055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.483360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.483368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.483678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.483684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.483991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.484001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 
00:29:11.779 [2024-11-20 16:40:57.484350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.484357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.484643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.484651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.484966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.484973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.485253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.485261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.485421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.485428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 
00:29:11.779 [2024-11-20 16:40:57.485616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.485623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.485918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.485924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.486239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.486246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.486403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.486410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.486800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.486807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 
00:29:11.779 [2024-11-20 16:40:57.487104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.487111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.487434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.487441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.487743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.487750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.488052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.488059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.488368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.488375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 
00:29:11.779 [2024-11-20 16:40:57.488576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.488583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.488773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.488779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.489063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.489070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.489394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.489401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.489746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.489753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 
00:29:11.779 [2024-11-20 16:40:57.490063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.490070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.490404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.490411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.490688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.490694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.490993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.491000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.491315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.491323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 
00:29:11.779 [2024-11-20 16:40:57.491685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.491692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.491995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.492003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.492310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.492316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.492597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.492610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.492938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.492944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 
00:29:11.779 [2024-11-20 16:40:57.493308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.493315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.493603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.493610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.779 [2024-11-20 16:40:57.493931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.779 [2024-11-20 16:40:57.493937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.779 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.494232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.494239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.494389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.494396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 
00:29:11.780 [2024-11-20 16:40:57.494677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.494684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.494881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.494888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.495208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.495215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.495526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.495533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.495822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.495832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 
00:29:11.780 [2024-11-20 16:40:57.496144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.496151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.496482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.496489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.496769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.496776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.496955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.496961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.497290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.497297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 
00:29:11.780 [2024-11-20 16:40:57.497465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.497473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.497800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.497807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.498199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.498206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.498490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.498497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 00:29:11.780 [2024-11-20 16:40:57.498821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.498828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it. 
00:29:11.780 [2024-11-20 16:40:57.499138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.780 [2024-11-20 16:40:57.499145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.780 qpair failed and we were unable to recover it.
00:29:11.780-00:29:11.783 [... the same pair of errors — connect() failed, errno = 111 (ECONNREFUSED) from posix.c:1054:posix_sock_create, followed by the nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock connection error for tqpair=0x7fe984000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 16:40:57.499309 through 16:40:57.532389; duplicate log entries elided ...]
00:29:11.783 [2024-11-20 16:40:57.532654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.532661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.532957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.532963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.533268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.533275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.533573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.533580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.533741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.533749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 
00:29:11.783 [2024-11-20 16:40:57.533927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.533934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.534347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.534355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.534572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.534579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.534882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.534889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.535083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.535090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 
00:29:11.783 [2024-11-20 16:40:57.535398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.535405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.535745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.535753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.536066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.536074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.536251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.783 [2024-11-20 16:40:57.536258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.783 qpair failed and we were unable to recover it. 00:29:11.783 [2024-11-20 16:40:57.536570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.536577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 
00:29:11.784 [2024-11-20 16:40:57.536785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.536792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.536980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.536991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.537298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.537304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.537707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.537714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.537899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.537906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 
00:29:11.784 [2024-11-20 16:40:57.538227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.538235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.538550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.538557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.538719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.538726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.539029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.539036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.539373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.539380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 
00:29:11.784 [2024-11-20 16:40:57.539676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.539683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.540001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.540008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.540305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.540312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.540471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.540479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.540792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.540800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 
00:29:11.784 [2024-11-20 16:40:57.540973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.540980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.541293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.541299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.541580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.541587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.541884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.541891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.542180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.542189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 
00:29:11.784 [2024-11-20 16:40:57.542349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.542357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.542676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.542683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.542887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.542894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.543241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.543248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.543535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.543542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 
00:29:11.784 [2024-11-20 16:40:57.543857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.543865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.544176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.544183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.544498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.544505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.544693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.544701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.544876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.544884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 
00:29:11.784 [2024-11-20 16:40:57.545183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.545191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.545513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.545520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 00:29:11.784 [2024-11-20 16:40:57.545610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.784 [2024-11-20 16:40:57.545617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:11.784 qpair failed and we were unable to recover it. 
00:29:11.784 [2024-11-20 16:40:57.545707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8aef30 is same with the state(6) to be set
00:29:11.784 Read completed with error (sct=0, sc=8)
00:29:11.784 starting I/O failed
00:29:11.785 [... burst of further Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:29:11.785 [2024-11-20 16:40:57.546584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:11.785 [2024-11-20 16:40:57.546900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.785 [2024-11-20 16:40:57.546921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:11.785 qpair failed and we were unable to recover it.
00:29:11.785 [2024-11-20 16:40:57.547103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.785 [2024-11-20 16:40:57.547116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:11.785 qpair failed and we were unable to recover it.
00:29:11.785 [2024-11-20 16:40:57.547407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.785 [2024-11-20 16:40:57.547417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:11.785 qpair failed and we were unable to recover it.
00:29:11.786 [... identical connect()/qpair-recovery failure sequence (errno = 111, tqpair=0x8a1010, addr=10.0.0.2, port=4420) repeated through 16:40:57.557484 ...]
00:29:11.786 [2024-11-20 16:40:57.557821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.557831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.558013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.558024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.558316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.558325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.558628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.558644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.558948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.558959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 
00:29:11.786 [2024-11-20 16:40:57.559345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.559355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.559679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.559689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.559889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.559899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.560230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.560241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.560549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.560559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 
00:29:11.786 [2024-11-20 16:40:57.560736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.560746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.561089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.561099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.561407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.561417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.561723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.561733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.562042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.562052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 
00:29:11.786 [2024-11-20 16:40:57.562373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.562382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.562726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.562736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.562949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.562958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.563136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.563148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.563338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.563348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 
00:29:11.786 [2024-11-20 16:40:57.563609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.563619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.563939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.563949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.564256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.564267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.564576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.564589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.564895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.564905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 
00:29:11.786 [2024-11-20 16:40:57.565205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.565216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.565530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.565540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.565831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.565841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.786 [2024-11-20 16:40:57.566152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.786 [2024-11-20 16:40:57.566161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.786 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.566340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.566349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 
00:29:11.787 [2024-11-20 16:40:57.566712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.566722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.567047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.567057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.567344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.567353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.567731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.567741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.568027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.568037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 
00:29:11.787 [2024-11-20 16:40:57.568360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.568369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.568556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.568565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.568967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.568976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.569337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.569347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.569666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.569676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 
00:29:11.787 [2024-11-20 16:40:57.570008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.570019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.570338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.570348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.570653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.570662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.570959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.570968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.571249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.571259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 
00:29:11.787 [2024-11-20 16:40:57.571570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.571580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.571699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.571708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.571973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.571987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.572275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.572284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.572572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.572582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 
00:29:11.787 [2024-11-20 16:40:57.572875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.572886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.573190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.573201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.573532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.573542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.573812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.573822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.574144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.574154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 
00:29:11.787 [2024-11-20 16:40:57.574365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.574375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.574687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.574697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.574987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.574997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.575225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.575235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.575312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.575322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 
00:29:11.787 [2024-11-20 16:40:57.575584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.575595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.575898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.575909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.576207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.576217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.576525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.576535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.576840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.576850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 
00:29:11.787 [2024-11-20 16:40:57.577034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.577044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.577216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.787 [2024-11-20 16:40:57.577227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.787 qpair failed and we were unable to recover it. 00:29:11.787 [2024-11-20 16:40:57.577343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.788 [2024-11-20 16:40:57.577352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.788 qpair failed and we were unable to recover it. 00:29:11.788 [2024-11-20 16:40:57.577632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.788 [2024-11-20 16:40:57.577642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.788 qpair failed and we were unable to recover it. 00:29:11.788 [2024-11-20 16:40:57.577919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.788 [2024-11-20 16:40:57.577929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.788 qpair failed and we were unable to recover it. 
00:29:11.788 [2024-11-20 16:40:57.578113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.788 [2024-11-20 16:40:57.578124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.788 qpair failed and we were unable to recover it. 00:29:11.788 [2024-11-20 16:40:57.578311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.788 [2024-11-20 16:40:57.578321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.788 qpair failed and we were unable to recover it. 00:29:11.788 [2024-11-20 16:40:57.578722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.788 [2024-11-20 16:40:57.578732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.788 qpair failed and we were unable to recover it. 00:29:11.788 [2024-11-20 16:40:57.579032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.788 [2024-11-20 16:40:57.579042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.788 qpair failed and we were unable to recover it. 00:29:11.788 [2024-11-20 16:40:57.579357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.788 [2024-11-20 16:40:57.579367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.788 qpair failed and we were unable to recover it. 
00:29:11.788 [2024-11-20 16:40:57.579667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.788 [2024-11-20 16:40:57.579677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:11.788 qpair failed and we were unable to recover it.
[... the same connect()/qpair error pair repeats continuously from 16:40:57.579667 through 16:40:57.613138 (~115 occurrences), every one with errno = 111 against tqpair=0x8a1010, addr=10.0.0.2, port=4420 ...]
00:29:11.791 [2024-11-20 16:40:57.613127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.791 [2024-11-20 16:40:57.613138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:11.791 qpair failed and we were unable to recover it.
00:29:11.791 [2024-11-20 16:40:57.613424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.613434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.613737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.613747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.614006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.614016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.614357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.614367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.614649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.614667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 
00:29:11.791 [2024-11-20 16:40:57.614881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.614890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.615194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.615205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.615508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.615517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.615830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.615839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.616128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.616139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 
00:29:11.791 [2024-11-20 16:40:57.616452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.616463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.616765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.616775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.617068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.617079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.617356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.791 [2024-11-20 16:40:57.617367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.791 qpair failed and we were unable to recover it. 00:29:11.791 [2024-11-20 16:40:57.617675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.617685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 
00:29:11.792 [2024-11-20 16:40:57.617885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.617895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.618071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.618082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.618395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.618405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.618688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.618697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.618899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.618909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 
00:29:11.792 [2024-11-20 16:40:57.619095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.619106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.619433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.619443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.619721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.619731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.620031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.620042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.620409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.620420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 
00:29:11.792 [2024-11-20 16:40:57.620722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.620732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.621010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.621020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.621340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.621350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.621547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.621557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.621866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.621877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 
00:29:11.792 [2024-11-20 16:40:57.622146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.622157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.622462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.622473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.622815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.622826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.623107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.623117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.623417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.623427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 
00:29:11.792 [2024-11-20 16:40:57.623731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.623741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.624070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.624081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.624388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.624400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.624676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.624686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.624999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.625010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 
00:29:11.792 [2024-11-20 16:40:57.625321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.625331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.625645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.625655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.626023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.626034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.626240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.626250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.626567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.626576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 
00:29:11.792 [2024-11-20 16:40:57.626881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.626892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.627183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.627194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.792 [2024-11-20 16:40:57.627506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.792 [2024-11-20 16:40:57.627517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.792 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.627797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.627807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.628114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.628124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 
00:29:11.793 [2024-11-20 16:40:57.628430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.628440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.628768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.628778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.629005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.629015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.629322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.629332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.629611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.629620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 
00:29:11.793 [2024-11-20 16:40:57.629937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.629948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.630249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.630259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.630536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.630545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.630860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.630870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.631166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.631212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 
00:29:11.793 [2024-11-20 16:40:57.631402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.631413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.631714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.631724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.632003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.632013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.632392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.632402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.632691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.632702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 
00:29:11.793 [2024-11-20 16:40:57.632953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.632964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.633346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.633356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.633648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.633657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.633960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.633970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.634144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.634161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 
00:29:11.793 [2024-11-20 16:40:57.634364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.634374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.634790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.634801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.635167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.635178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.635480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.635490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 00:29:11.793 [2024-11-20 16:40:57.635680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.793 [2024-11-20 16:40:57.635690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.793 qpair failed and we were unable to recover it. 
00:29:11.793 [2024-11-20 16:40:57.636013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.793 [2024-11-20 16:40:57.636024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:11.793 qpair failed and we were unable to recover it.
[... the same triplet — connect() failed (errno = 111), sock connection error, "qpair failed and we were unable to recover it." — repeats ~110 more times between 16:40:57.636 and 16:40:57.670, all targeting addr=10.0.0.2, port=4420; tqpair is 0x8a1010 throughout except for three attempts at 16:40:57.662-663 reported against tqpair=0x7fe98c000b90 ...]
00:29:11.797 [2024-11-20 16:40:57.670167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.797 [2024-11-20 16:40:57.670177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:11.797 qpair failed and we were unable to recover it.
00:29:11.797 [2024-11-20 16:40:57.670474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.670483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.670801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.670812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.671199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.671209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.671541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.671551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.671859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.671869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-11-20 16:40:57.672046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.672055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.672318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.672328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.672612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.672621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.672942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.672951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.673242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.673252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-11-20 16:40:57.673586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.673596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.673902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.673911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.674221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.674231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.674502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.674512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.674716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.674727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-11-20 16:40:57.675064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.675074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.675361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.675370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.675629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.675639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.675971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.675984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.676161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.676172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-11-20 16:40:57.676479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.676490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.676785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.676795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.677082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.677093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.677298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.677308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.677619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.677628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-11-20 16:40:57.677799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.677810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.678076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.678086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.678380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.678390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.678667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.678677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.678980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.678994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-11-20 16:40:57.679289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.679299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.797 qpair failed and we were unable to recover it. 00:29:11.797 [2024-11-20 16:40:57.679590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.797 [2024-11-20 16:40:57.679600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.679903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.679912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.680220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.680231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.680508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.680519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-11-20 16:40:57.680817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.680833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.681135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.681145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.681422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.681432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.681712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.681722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.682036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.682046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-11-20 16:40:57.682392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.682403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.682706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.682715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.683021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.683031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.683233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.683244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.683446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.683455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-11-20 16:40:57.683714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.683724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.684027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.684038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.684327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.684336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.684622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.684631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.684947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.684956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-11-20 16:40:57.685272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.685283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.685598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.685610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.685881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.685890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.686192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.686202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.686486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.686505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-11-20 16:40:57.686813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.686822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.687133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.687143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.687449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.687459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.687746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.687756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.688063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.688073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-11-20 16:40:57.688350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.688359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.688647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.688657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.688993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.689003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.689315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.689325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.689602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.689611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-11-20 16:40:57.689816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.689826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.690150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.690160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.690443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.690454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.690753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.798 [2024-11-20 16:40:57.690763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.798 qpair failed and we were unable to recover it. 00:29:11.798 [2024-11-20 16:40:57.691084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.799 [2024-11-20 16:40:57.691094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-11-20 16:40:57.691423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.799 [2024-11-20 16:40:57.691433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.799 qpair failed and we were unable to recover it. 00:29:11.799 [2024-11-20 16:40:57.691736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.799 [2024-11-20 16:40:57.691745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.799 qpair failed and we were unable to recover it. 00:29:11.799 [2024-11-20 16:40:57.692054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.799 [2024-11-20 16:40:57.692064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.799 qpair failed and we were unable to recover it. 00:29:11.799 [2024-11-20 16:40:57.692282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.799 [2024-11-20 16:40:57.692292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.799 qpair failed and we were unable to recover it. 00:29:11.799 [2024-11-20 16:40:57.692606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:11.799 [2024-11-20 16:40:57.692615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-11-20 16:40:57.692896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:11.799 [2024-11-20 16:40:57.692906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:11.799 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated ~114 more times between 16:40:57.693 and 16:40:57.728; repeats elided ...]
00:29:12.078 [2024-11-20 16:40:57.728489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.728500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.728785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.728796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.729011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.729022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.729330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.729340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.729645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.729655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 
00:29:12.078 [2024-11-20 16:40:57.729843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.729853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.730150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.730160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.730446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.730455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.730758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.730769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.731050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.731060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 
00:29:12.078 [2024-11-20 16:40:57.731368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.731377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.731643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.731653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.731893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.731903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.732205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.732215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.732429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.732439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 
00:29:12.078 [2024-11-20 16:40:57.732743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.732753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.733063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.733074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.078 [2024-11-20 16:40:57.733331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.078 [2024-11-20 16:40:57.733341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.078 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.733626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.733636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.733944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.733954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 
00:29:12.079 [2024-11-20 16:40:57.734236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.734246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.734589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.734598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.734901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.734911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.735099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.735112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.735271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.735280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 
00:29:12.079 [2024-11-20 16:40:57.735576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.735586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.735899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.735909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.736285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.736295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.736517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.736526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.736878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.736887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 
00:29:12.079 [2024-11-20 16:40:57.737195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.737205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.737388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.737397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.737698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.737707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.738023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.738034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.738346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.738356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 
00:29:12.079 [2024-11-20 16:40:57.738633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.738650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.738825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.738836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.739160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.739170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.739519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.739532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.739805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.739814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 
00:29:12.079 [2024-11-20 16:40:57.740121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.740131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.740468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.740478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.740828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.740838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.741123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.741133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.741511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.741521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 
00:29:12.079 [2024-11-20 16:40:57.741817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.741827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.742157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.742167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.742482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.742492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.742852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.742863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.743149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.743159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 
00:29:12.079 [2024-11-20 16:40:57.743452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.743469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.743779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.743789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.744096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.744107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.744427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.744436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 00:29:12.079 [2024-11-20 16:40:57.744725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.744736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.079 qpair failed and we were unable to recover it. 
00:29:12.079 [2024-11-20 16:40:57.745044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.079 [2024-11-20 16:40:57.745054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.745335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.745345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.745647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.745656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.745972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.745986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.746175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.746186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 
00:29:12.080 [2024-11-20 16:40:57.746363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.746373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.746639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.746648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.746967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.746977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.747265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.747276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.747545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.747555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 
00:29:12.080 [2024-11-20 16:40:57.747866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.747877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.748050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.748062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.748338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.748348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.748665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.748675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.749048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.749058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 
00:29:12.080 [2024-11-20 16:40:57.749141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.749151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.749479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.749489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.749762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.749771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.750091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.750101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 00:29:12.080 [2024-11-20 16:40:57.750400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.080 [2024-11-20 16:40:57.750410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.080 qpair failed and we were unable to recover it. 
00:29:12.080 [2024-11-20 16:40:57.750696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.080 [2024-11-20 16:40:57.750705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:12.080 qpair failed and we were unable to recover it.
[... the same three-record error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats unchanged for every subsequent connection retry, timestamps 2024-11-20 16:40:57.751005 through 16:40:57.785512, log clock 00:29:12.080-00:29:12.083 ...]
00:29:12.083 [2024-11-20 16:40:57.785807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.785817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.785969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.785979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.786333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.786343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.786673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.786683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.786992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.787002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 
00:29:12.083 [2024-11-20 16:40:57.787312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.787321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.787703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.787713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.787995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.788005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.788319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.788328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.788593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.788603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 
00:29:12.083 [2024-11-20 16:40:57.788905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.788915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.789105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.083 [2024-11-20 16:40:57.789115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.083 qpair failed and we were unable to recover it. 00:29:12.083 [2024-11-20 16:40:57.789482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.789492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.789798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.789808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.790144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.790154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 
00:29:12.084 [2024-11-20 16:40:57.790461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.790470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.790771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.790781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.791048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.791058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.791244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.791254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.791569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.791578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 
00:29:12.084 [2024-11-20 16:40:57.791773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.791783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.791974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.791999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.792326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.792335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.792618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.792629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.792818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.792827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 
00:29:12.084 [2024-11-20 16:40:57.793044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.793057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.793468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.793477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.793795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.793806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.794115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.794125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.794419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.794428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 
00:29:12.084 [2024-11-20 16:40:57.794749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.794760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.795095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.795106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.795317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.795327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.795515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.795525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.795801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.795812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 
00:29:12.084 [2024-11-20 16:40:57.796054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.796064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.796261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.796271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.796470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.796480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.796777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.796786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.797090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.797100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 
00:29:12.084 [2024-11-20 16:40:57.797452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.797462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.797743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.797753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.797972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.797989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.798304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.798315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.798598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.798608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 
00:29:12.084 [2024-11-20 16:40:57.798912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.798921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.799134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.799144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.799472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.799482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.799761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.799777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 00:29:12.084 [2024-11-20 16:40:57.800040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.084 [2024-11-20 16:40:57.800050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.084 qpair failed and we were unable to recover it. 
00:29:12.084 [2024-11-20 16:40:57.800411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.800421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.800606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.800616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.800927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.800939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.801278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.801288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.801599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.801609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 
00:29:12.085 [2024-11-20 16:40:57.801916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.801925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.802275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.802285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.802696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.802706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.802999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.803009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.803186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.803195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 
00:29:12.085 [2024-11-20 16:40:57.803466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.803476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.803687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.803697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.804043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.804053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.804353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.804363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.804680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.804689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 
00:29:12.085 [2024-11-20 16:40:57.805016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.805026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.805351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.805361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.805667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.805686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.805971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.805980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.806277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.806287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 
00:29:12.085 [2024-11-20 16:40:57.806565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.806575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.806913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.806923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.807285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.807295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.807597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.807607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.807946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.807956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 
00:29:12.085 [2024-11-20 16:40:57.808269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.808279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.808580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.808589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.808870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.808880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.809089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.809100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 00:29:12.085 [2024-11-20 16:40:57.809410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.085 [2024-11-20 16:40:57.809420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.085 qpair failed and we were unable to recover it. 
00:29:12.088 [2024-11-20 16:40:57.841000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.088 [2024-11-20 16:40:57.841011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.088 qpair failed and we were unable to recover it. 00:29:12.088 [2024-11-20 16:40:57.841325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.088 [2024-11-20 16:40:57.841335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.088 qpair failed and we were unable to recover it. 00:29:12.088 [2024-11-20 16:40:57.841640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.088 [2024-11-20 16:40:57.841649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.088 qpair failed and we were unable to recover it. 00:29:12.088 [2024-11-20 16:40:57.841932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.088 [2024-11-20 16:40:57.841942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.088 qpair failed and we were unable to recover it. 00:29:12.088 [2024-11-20 16:40:57.842310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.088 [2024-11-20 16:40:57.842320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.088 qpair failed and we were unable to recover it. 
00:29:12.088 [2024-11-20 16:40:57.842628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.088 [2024-11-20 16:40:57.842638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.088 qpair failed and we were unable to recover it. 00:29:12.088 [2024-11-20 16:40:57.842945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.088 [2024-11-20 16:40:57.842955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.088 qpair failed and we were unable to recover it. 00:29:12.088 [2024-11-20 16:40:57.843270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.088 [2024-11-20 16:40:57.843281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.088 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.843504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.843513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.843703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.843713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 
00:29:12.089 [2024-11-20 16:40:57.844056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.844066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.844233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.844243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.844532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.844543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.844856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.844866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.845054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.845064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 
00:29:12.089 [2024-11-20 16:40:57.845345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.845355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.845660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.845669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.845809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.845819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.846158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.846169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.846491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.846502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 
00:29:12.089 [2024-11-20 16:40:57.846775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.846786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.847089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.847099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.847382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.847391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.847694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.847703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.848012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.848021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 
00:29:12.089 [2024-11-20 16:40:57.848324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.848333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.848611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.848630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.848931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.848941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.849177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.849188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.849508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.849518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 
00:29:12.089 [2024-11-20 16:40:57.849841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.849851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.850131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.850141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.850446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.850456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.850756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.850765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.851055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.851065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 
00:29:12.089 [2024-11-20 16:40:57.851391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.851401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.851577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.851587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.851934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.851944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.852288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.852298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.852569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.852580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 
00:29:12.089 [2024-11-20 16:40:57.852867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.852877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.853198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.853209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.853514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.853524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.853820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.853829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.853916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.853926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 
00:29:12.089 [2024-11-20 16:40:57.854228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.089 [2024-11-20 16:40:57.854238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.089 qpair failed and we were unable to recover it. 00:29:12.089 [2024-11-20 16:40:57.854539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.854548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.854871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.854881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.855077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.855088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.855285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.855295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 
00:29:12.090 [2024-11-20 16:40:57.855581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.855591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.855876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.855893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.856180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.856191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.856510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.856520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.856817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.856827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 
00:29:12.090 [2024-11-20 16:40:57.857084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.857096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.857409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.857419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.857764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.857773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.858053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.858063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.858414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.858423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 
00:29:12.090 [2024-11-20 16:40:57.858702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.858712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.858969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.858979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.859299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.859309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.859579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.859589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.859858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.859868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 
00:29:12.090 [2024-11-20 16:40:57.860255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.860265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.860570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.860580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.860886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.860896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.861174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.861185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.861482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.861493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 
00:29:12.090 [2024-11-20 16:40:57.861789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.861798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.862013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.862023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.862322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.862331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.862644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.862654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.862988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.862998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 
00:29:12.090 [2024-11-20 16:40:57.863316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.863326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.863630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.863640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.863933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.863943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.864245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.864255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 00:29:12.090 [2024-11-20 16:40:57.864560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.090 [2024-11-20 16:40:57.864570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.090 qpair failed and we were unable to recover it. 
00:29:12.092 [2024-11-20 16:40:57.879633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.092 [2024-11-20 16:40:57.879661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.092 qpair failed and we were unable to recover it.
00:29:12.093 [2024-11-20 16:40:57.897497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.093 [2024-11-20 16:40:57.897504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.093 qpair failed and we were unable to recover it. 00:29:12.093 [2024-11-20 16:40:57.897813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.093 [2024-11-20 16:40:57.897820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.093 qpair failed and we were unable to recover it. 00:29:12.093 [2024-11-20 16:40:57.898025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.093 [2024-11-20 16:40:57.898032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.093 qpair failed and we were unable to recover it. 00:29:12.093 [2024-11-20 16:40:57.898310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.093 [2024-11-20 16:40:57.898318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.898636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.898643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-11-20 16:40:57.898933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.898940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.899302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.899308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.899581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.899587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.899886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.899893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.900211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.900218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-11-20 16:40:57.900419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.900426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.900704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.900713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.900895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.900903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.901276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.901283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.901565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.901572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-11-20 16:40:57.901873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.901880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.902197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.902204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.902548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.902554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.902755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.902762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.903067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.903074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-11-20 16:40:57.903399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.903406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.903595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.903603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.903874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.903882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.904193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.904201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.904516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.904524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-11-20 16:40:57.904835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.904843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.905176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.905182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.905478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.905487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.905786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.905793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.906094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.906101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-11-20 16:40:57.906412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.906419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.906724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.906731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.907054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.907061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.907347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.907354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.907654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.907661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-11-20 16:40:57.907960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.907966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.908247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.908254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.908568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.908576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.908891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.908899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.909107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.909115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 
00:29:12.094 [2024-11-20 16:40:57.909284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.909291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.094 qpair failed and we were unable to recover it. 00:29:12.094 [2024-11-20 16:40:57.909593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.094 [2024-11-20 16:40:57.909599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.909637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.909644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.909906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.909913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.910223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.910230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 
00:29:12.095 [2024-11-20 16:40:57.910521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.910528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.910696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.910704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.910967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.910974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.911284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.911291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.911454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.911461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 
00:29:12.095 [2024-11-20 16:40:57.911733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.911740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.912017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.912026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.912339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.912346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.912671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.912679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.912967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.912975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 
00:29:12.095 [2024-11-20 16:40:57.913261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.913267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.913575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.913581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.913873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.913880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.914164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.914172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.914489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.914496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 
00:29:12.095 [2024-11-20 16:40:57.914803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.914811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.915125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.915133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.915319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.915327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.915637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.915643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.915956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.915963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 
00:29:12.095 [2024-11-20 16:40:57.916263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.916270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.916556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.916564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.916871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.916878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.917189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.917196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.917569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.917576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 
00:29:12.095 [2024-11-20 16:40:57.917848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.917855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.918042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.918049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.095 [2024-11-20 16:40:57.918348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.095 [2024-11-20 16:40:57.918355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.095 qpair failed and we were unable to recover it. 00:29:12.096 [2024-11-20 16:40:57.918721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.096 [2024-11-20 16:40:57.918727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.096 qpair failed and we were unable to recover it. 00:29:12.096 [2024-11-20 16:40:57.919020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.096 [2024-11-20 16:40:57.919027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.096 qpair failed and we were unable to recover it. 
00:29:12.096 [2024-11-20 16:40:57.919186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.096 [2024-11-20 16:40:57.919193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.096 qpair failed and we were unable to recover it.
00:29:12.096 [... the same three-message sequence (connect() refused with errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every reconnect attempt from 16:40:57.919 through 16:40:57.952 ...]
00:29:12.099 [2024-11-20 16:40:57.952207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.099 [2024-11-20 16:40:57.952214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.099 qpair failed and we were unable to recover it.
00:29:12.099 [2024-11-20 16:40:57.952517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.952524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.952807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.952813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.953108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.953115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.953417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.953424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.953727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.953734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-11-20 16:40:57.954043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.954050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.954376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.954383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.954684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.954691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.954870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.954877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.955093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.955101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-11-20 16:40:57.955456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.955463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.955746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.955753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.956074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.956080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.956257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.956264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.956507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.956513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-11-20 16:40:57.956876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.956882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.957166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.957173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.957365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.957372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.957672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.957678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.957961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.957968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 
00:29:12.099 [2024-11-20 16:40:57.958283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.958290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.958586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.958595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.958752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.958760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.099 qpair failed and we were unable to recover it. 00:29:12.099 [2024-11-20 16:40:57.959092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.099 [2024-11-20 16:40:57.959100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.959294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.959301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 
00:29:12.100 [2024-11-20 16:40:57.959465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.959472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.959777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.959784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.959961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.959968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.960252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.960259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.960544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.960551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 
00:29:12.100 [2024-11-20 16:40:57.960853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.960860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.961171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.961178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.961478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.961485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.961793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.961800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.962121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.962128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 
00:29:12.100 [2024-11-20 16:40:57.962468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.962475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.962782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.962790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.963014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.963022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.963215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.963222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.963488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.963495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 
00:29:12.100 [2024-11-20 16:40:57.963796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.963803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.964114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.964121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.964431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.964437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.964728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.964736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.965086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.965093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 
00:29:12.100 [2024-11-20 16:40:57.965383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.965389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.965717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.965725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.966010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.966017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.966205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.966212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.966505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.966512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 
00:29:12.100 [2024-11-20 16:40:57.966827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.966833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.967049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.100 [2024-11-20 16:40:57.967056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.100 qpair failed and we were unable to recover it. 00:29:12.100 [2024-11-20 16:40:57.967319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.967325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.967648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.967655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.967955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.967962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 
00:29:12.101 [2024-11-20 16:40:57.968262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.968270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.968513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.968521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.968807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.968813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.969101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.969110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.969316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.969331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 
00:29:12.101 [2024-11-20 16:40:57.969629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.969636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.969924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.969933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.970235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.970242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.970552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.970559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.970849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.970857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 
00:29:12.101 [2024-11-20 16:40:57.971160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.971168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.971476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.971483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.971778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.971785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.972091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.972098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.972400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.972407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 
00:29:12.101 [2024-11-20 16:40:57.972592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.972599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.972903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.972909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.973182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.973189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.973505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.973511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 00:29:12.101 [2024-11-20 16:40:57.973819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.101 [2024-11-20 16:40:57.973825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.101 qpair failed and we were unable to recover it. 
00:29:12.101 [2024-11-20 16:40:57.974136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.101 [2024-11-20 16:40:57.974143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.101 qpair failed and we were unable to recover it.
00:29:12.101 [... same three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats with new timestamps through [2024-11-20 16:40:58.007710] ...]
00:29:12.105 [2024-11-20 16:40:58.008016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.008024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.008344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.008351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.008635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.008650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.008956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.008962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.009243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.009250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 
00:29:12.105 [2024-11-20 16:40:58.009561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.009568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.009753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.009760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.010058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.010065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.010371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.010377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.010687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.010693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 
00:29:12.105 [2024-11-20 16:40:58.011017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.011024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.011319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.011326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.011636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.011643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.011942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.011949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.012255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.012262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 
00:29:12.105 [2024-11-20 16:40:58.012575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.012582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.012896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.012903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.013276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.013283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.013576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.013583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.013915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.013922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 
00:29:12.105 [2024-11-20 16:40:58.014233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.014241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.014592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.014599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.014783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.014790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.014969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.014976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.015290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.015298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 
00:29:12.105 [2024-11-20 16:40:58.015592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.015599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.015910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.015917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.016226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.105 [2024-11-20 16:40:58.016234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.105 qpair failed and we were unable to recover it. 00:29:12.105 [2024-11-20 16:40:58.016533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.016539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 00:29:12.106 [2024-11-20 16:40:58.016840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.016847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 
00:29:12.106 [2024-11-20 16:40:58.017026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.017034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 00:29:12.106 [2024-11-20 16:40:58.017314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.017323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 00:29:12.106 [2024-11-20 16:40:58.017511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.017518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 00:29:12.106 [2024-11-20 16:40:58.017862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.017869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 00:29:12.106 [2024-11-20 16:40:58.018144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.018151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 
00:29:12.106 [2024-11-20 16:40:58.018473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.018480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 00:29:12.106 [2024-11-20 16:40:58.018798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.018805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 00:29:12.106 [2024-11-20 16:40:58.019106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.106 [2024-11-20 16:40:58.019113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.106 qpair failed and we were unable to recover it. 00:29:12.382 [2024-11-20 16:40:58.019419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.382 [2024-11-20 16:40:58.019428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.382 qpair failed and we were unable to recover it. 00:29:12.382 [2024-11-20 16:40:58.019734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.382 [2024-11-20 16:40:58.019741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.382 qpair failed and we were unable to recover it. 
00:29:12.382 [2024-11-20 16:40:58.020040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.382 [2024-11-20 16:40:58.020048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.382 qpair failed and we were unable to recover it. 00:29:12.382 [2024-11-20 16:40:58.020360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.382 [2024-11-20 16:40:58.020366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.382 qpair failed and we were unable to recover it. 00:29:12.382 [2024-11-20 16:40:58.020676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.382 [2024-11-20 16:40:58.020683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.382 qpair failed and we were unable to recover it. 00:29:12.382 [2024-11-20 16:40:58.020865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.382 [2024-11-20 16:40:58.020872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.382 qpair failed and we were unable to recover it. 00:29:12.382 [2024-11-20 16:40:58.021262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.382 [2024-11-20 16:40:58.021269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.382 qpair failed and we were unable to recover it. 
00:29:12.383 [2024-11-20 16:40:58.021617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.021624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.021783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.021790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.022117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.022124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.022427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.022434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.022725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.022732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 
00:29:12.383 [2024-11-20 16:40:58.023035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.023042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.023358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.023364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.023564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.023571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.023797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.023804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.024121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.024128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 
00:29:12.383 [2024-11-20 16:40:58.024308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.024316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.024670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.024676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.024975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.024987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.025294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.025300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.025607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.025614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 
00:29:12.383 [2024-11-20 16:40:58.025927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.025934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.026260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.026267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.026566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.026574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.026862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.026870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.027148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.027155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 
00:29:12.383 [2024-11-20 16:40:58.027451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.027458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.027657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.027664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.027990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.027997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.028318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.028325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.028644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.028651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 
00:29:12.383 [2024-11-20 16:40:58.028938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.028945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.029113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.029122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.029461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.029468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.029854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.029860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.030160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.030167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 
00:29:12.383 [2024-11-20 16:40:58.030326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.030334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.030518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.030526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.030800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.030807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.030987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.030994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 00:29:12.383 [2024-11-20 16:40:58.031242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.383 [2024-11-20 16:40:58.031249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.383 qpair failed and we were unable to recover it. 
00:29:12.386 [2024-11-20 16:40:58.062513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.386 [2024-11-20 16:40:58.062519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.386 qpair failed and we were unable to recover it. 00:29:12.386 [2024-11-20 16:40:58.062813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.386 [2024-11-20 16:40:58.062820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.386 qpair failed and we were unable to recover it. 00:29:12.386 [2024-11-20 16:40:58.063215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.386 [2024-11-20 16:40:58.063222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.386 qpair failed and we were unable to recover it. 00:29:12.386 [2024-11-20 16:40:58.063486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.386 [2024-11-20 16:40:58.063492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.386 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.063664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.063672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 
00:29:12.387 [2024-11-20 16:40:58.063993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.064000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.064317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.064324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.064638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.064644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.064935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.064943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.065117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.065125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 
00:29:12.387 [2024-11-20 16:40:58.065453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.065460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.065757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.065764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.066068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.066075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.066378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.066385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.066716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.066723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 
00:29:12.387 [2024-11-20 16:40:58.067011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.067019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.067332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.067338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.067633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.067640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.067942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.067949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.068237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.068245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 
00:29:12.387 [2024-11-20 16:40:58.068401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.068408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.068676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.068683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.068872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.068879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.069165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.069172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.069491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.069498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 
00:29:12.387 [2024-11-20 16:40:58.069805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.069813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.070178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.070185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.070478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.070485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.070788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.070797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.071170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.071177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 
00:29:12.387 [2024-11-20 16:40:58.071487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.071494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.071778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.071785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.072123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.072130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.072442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.072449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 00:29:12.387 [2024-11-20 16:40:58.072788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.387 [2024-11-20 16:40:58.072795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.387 qpair failed and we were unable to recover it. 
00:29:12.387 [2024-11-20 16:40:58.072960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.072968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.073254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.073262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.073550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.073557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.073854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.073861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.074162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.074169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 
00:29:12.388 [2024-11-20 16:40:58.074339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.074346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.074645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.074651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.074957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.074963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.075328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.075335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.075506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.075512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 
00:29:12.388 [2024-11-20 16:40:58.075887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.075895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.076198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.076205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.076548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.076554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.076722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.076729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.077040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.077047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 
00:29:12.388 [2024-11-20 16:40:58.077356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.077363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.077661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.077676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.077990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.077997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.078290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.078297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.078501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.078507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 
00:29:12.388 [2024-11-20 16:40:58.078690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.078697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.078984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.078991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.079071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.079079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.079384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.079390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.079669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.079676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 
00:29:12.388 [2024-11-20 16:40:58.079995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.080002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.080200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.080207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.080381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.080388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.080700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.080706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.080979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.080989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 
00:29:12.388 [2024-11-20 16:40:58.081152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.081159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.081494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.081502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.081658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.081665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.081811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.081819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.082114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.082121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 
00:29:12.388 [2024-11-20 16:40:58.082442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.082449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.082751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.082759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.082947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.388 [2024-11-20 16:40:58.082955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.388 qpair failed and we were unable to recover it. 00:29:12.388 [2024-11-20 16:40:58.083293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.389 [2024-11-20 16:40:58.083300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.389 qpair failed and we were unable to recover it. 00:29:12.389 [2024-11-20 16:40:58.083500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.389 [2024-11-20 16:40:58.083507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.389 qpair failed and we were unable to recover it. 
00:29:12.389 [2024-11-20 16:40:58.083838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.389 [2024-11-20 16:40:58.083846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.389 qpair failed and we were unable to recover it. 00:29:12.389 [2024-11-20 16:40:58.084153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.389 [2024-11-20 16:40:58.084160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.389 qpair failed and we were unable to recover it. 00:29:12.389 [2024-11-20 16:40:58.084528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.389 [2024-11-20 16:40:58.084535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.389 qpair failed and we were unable to recover it. 00:29:12.389 [2024-11-20 16:40:58.084698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.389 [2024-11-20 16:40:58.084705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.389 qpair failed and we were unable to recover it. 00:29:12.389 [2024-11-20 16:40:58.085001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.389 [2024-11-20 16:40:58.085008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.389 qpair failed and we were unable to recover it. 
00:29:12.392 [2024-11-20 16:40:58.117382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.117390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.117693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.117700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.117887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.117895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.118189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.118197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.118497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.118504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 
00:29:12.392 [2024-11-20 16:40:58.118808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.118816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.119101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.119109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.119405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.119413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.119724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.119731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.120074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.120082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 
00:29:12.392 [2024-11-20 16:40:58.120368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.120376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.120759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.120767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.121064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.121072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.121378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.121386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.121702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.121709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 
00:29:12.392 [2024-11-20 16:40:58.122027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.122035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.122343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.122351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.122654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.122662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.122941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.122948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.123245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.123252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 
00:29:12.392 [2024-11-20 16:40:58.123561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.123568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.123878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.123886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.124166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.124174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.124486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.124493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 00:29:12.392 [2024-11-20 16:40:58.124820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.124828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.392 qpair failed and we were unable to recover it. 
00:29:12.392 [2024-11-20 16:40:58.125130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.392 [2024-11-20 16:40:58.125137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.125440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.125447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.125768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.125775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.126080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.126087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.126397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.126404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 
00:29:12.393 [2024-11-20 16:40:58.126726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.126733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.127041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.127048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.127357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.127364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.127651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.127658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.127976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.127990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 
00:29:12.393 [2024-11-20 16:40:58.128285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.128292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.128585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.128591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.128869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.128876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.129196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.129205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.129383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.129390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 
00:29:12.393 [2024-11-20 16:40:58.129662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.129669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.129991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.129999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.130297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.130303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.130596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.130603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.130885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.130892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 
00:29:12.393 [2024-11-20 16:40:58.131190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.131196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.131389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.131396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.131756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.131763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.132108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.132116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.132412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.132419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 
00:29:12.393 [2024-11-20 16:40:58.132729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.132736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.133059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.133065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.133215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.133223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.133495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.133502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.133685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.133693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 
00:29:12.393 [2024-11-20 16:40:58.133885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.133893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.134069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.134077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.134346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.134352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.134657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.134671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.134971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.134977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 
00:29:12.393 [2024-11-20 16:40:58.135159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.135166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.135348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.135354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.135668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.393 [2024-11-20 16:40:58.135674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.393 qpair failed and we were unable to recover it. 00:29:12.393 [2024-11-20 16:40:58.135844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.135851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.136039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.136046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 
00:29:12.394 [2024-11-20 16:40:58.136362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.136369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.136657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.136664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.136951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.136957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.137266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.137273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.137583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.137590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 
00:29:12.394 [2024-11-20 16:40:58.137915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.137922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.138112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.138119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.138432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.138438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.138762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.138769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 00:29:12.394 [2024-11-20 16:40:58.138943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.138950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it. 
00:29:12.394 [2024-11-20 16:40:58.139222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.394 [2024-11-20 16:40:58.139230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.394 qpair failed and we were unable to recover it.
[the same three-line failure — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fe984000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously through 2024-11-20 16:40:58.172660; the repeats are omitted here]
00:29:12.397 [2024-11-20 16:40:58.172965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.172972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.173211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.173218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.173536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.173544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.173836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.173843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.174143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.174151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 
00:29:12.397 [2024-11-20 16:40:58.174350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.174357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.174628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.174635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.174935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.174942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.175257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.175268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.175555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.175561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 
00:29:12.397 [2024-11-20 16:40:58.175872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.175879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.176159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.176166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.176460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.176467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.176770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.176777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.177093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.177100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 
00:29:12.397 [2024-11-20 16:40:58.177426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.177433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.177741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.177748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.178055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.178062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.178099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.178106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 00:29:12.397 [2024-11-20 16:40:58.178379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.178386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.397 qpair failed and we were unable to recover it. 
00:29:12.397 [2024-11-20 16:40:58.178597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.397 [2024-11-20 16:40:58.178604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.178912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.178918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.179306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.179313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.179601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.179608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.179921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.179928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 
00:29:12.398 [2024-11-20 16:40:58.180219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.180226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.180559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.180566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.180766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.180773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.181108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.181114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.181413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.181419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 
00:29:12.398 [2024-11-20 16:40:58.181735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.181742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.182032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.182038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.182359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.182365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.182674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.182681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.182887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.182894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 
00:29:12.398 [2024-11-20 16:40:58.183182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.183189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.183356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.183363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.183681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.183689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.184006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.184014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.184204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.184211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 
00:29:12.398 [2024-11-20 16:40:58.184511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.184518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.184828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.184834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.185142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.185149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.185346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.185353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.185551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.185557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 
00:29:12.398 [2024-11-20 16:40:58.185837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.185843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.186175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.186182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.186489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.186495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.186760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.186768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.187088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.187095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 
00:29:12.398 [2024-11-20 16:40:58.187404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.187410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.187719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.187726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.188039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.188046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.188347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.188353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.188639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.188645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 
00:29:12.398 [2024-11-20 16:40:58.188935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.188941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.189263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.189270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.189586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.398 [2024-11-20 16:40:58.189592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.398 qpair failed and we were unable to recover it. 00:29:12.398 [2024-11-20 16:40:58.189918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.189925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.190207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.190214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 
00:29:12.399 [2024-11-20 16:40:58.190529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.190536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.190710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.190717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.191045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.191053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.191244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.191251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.191590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.191597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 
00:29:12.399 [2024-11-20 16:40:58.191903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.191910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.192215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.192222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.192516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.192523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.192830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.192837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.193143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.193150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 
00:29:12.399 [2024-11-20 16:40:58.193452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.193459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.193775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.193782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.194095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.194102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.194423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.194430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.194743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.194750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 
00:29:12.399 [2024-11-20 16:40:58.194961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.194967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.195278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.195285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.195479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.195486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.195647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.195653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 00:29:12.399 [2024-11-20 16:40:58.195849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.399 [2024-11-20 16:40:58.195856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.399 qpair failed and we were unable to recover it. 
00:29:12.402 [2024-11-20 16:40:58.228657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.228663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.228831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.228839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.229194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.229201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.229497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.229504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.229699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.229706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 
00:29:12.402 [2024-11-20 16:40:58.230005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.230014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.230327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.230334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.230618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.230625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.230931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.230937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.231248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.231255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 
00:29:12.402 [2024-11-20 16:40:58.231564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.231570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.231853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.231868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.232171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.232178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.232462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.232468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 00:29:12.402 [2024-11-20 16:40:58.232771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.402 [2024-11-20 16:40:58.232777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.402 qpair failed and we were unable to recover it. 
00:29:12.403 [2024-11-20 16:40:58.233084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.233091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.233411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.233418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.233735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.233741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.233911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.233918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.234013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.234020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 
00:29:12.403 [2024-11-20 16:40:58.234292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.234300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.234587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.234594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.234723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.234730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.234991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.234999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.235303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.235310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 
00:29:12.403 [2024-11-20 16:40:58.235636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.235643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.235994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.236002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.236299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.236306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.236499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.236506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.236761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.236768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 
00:29:12.403 [2024-11-20 16:40:58.237079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.237086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.237297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.237303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.237572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.237578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.237881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.237888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.238195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.238202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 
00:29:12.403 [2024-11-20 16:40:58.238474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.238481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.238800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.238808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.239134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.239141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.239448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.239455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.239764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.239771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 
00:29:12.403 [2024-11-20 16:40:58.240095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.240102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.240275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.240282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.240661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.240668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.240970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.240976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.241253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.241260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 
00:29:12.403 [2024-11-20 16:40:58.241464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.241472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.241774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.241781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.242090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.242098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.242370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.242377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.242709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.242715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 
00:29:12.403 [2024-11-20 16:40:58.242911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.242918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.243214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.243220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.403 qpair failed and we were unable to recover it. 00:29:12.403 [2024-11-20 16:40:58.243510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.403 [2024-11-20 16:40:58.243517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.243801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.243808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.243969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.243977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 
00:29:12.404 [2024-11-20 16:40:58.244360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.244366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.244513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.244520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.244786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.244793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.245075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.245082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.245386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.245400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 
00:29:12.404 [2024-11-20 16:40:58.245703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.245709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.246004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.246011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.246378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.246385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.246695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.246702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.247023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.247030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 
00:29:12.404 [2024-11-20 16:40:58.247347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.247353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.247547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.247554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.247823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.247829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.248148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.248155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.248463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.248469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 
00:29:12.404 [2024-11-20 16:40:58.248751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.248757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.248956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.248962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.249268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.249275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.249557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.249564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.249872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.249878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 
00:29:12.404 [2024-11-20 16:40:58.250153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.250161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.250476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.250483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.250793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.250800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.251111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.251118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.251261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.251268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 
00:29:12.404 [2024-11-20 16:40:58.251492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.251499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.251707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.251713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.252023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.252030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.252234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.252242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.252542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.252548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 
00:29:12.404 [2024-11-20 16:40:58.252834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.252843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.253130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.253137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.253522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.253529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.253820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.253827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 00:29:12.404 [2024-11-20 16:40:58.253986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.404 [2024-11-20 16:40:58.253994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.404 qpair failed and we were unable to recover it. 
00:29:12.404 [2024-11-20 16:40:58.254318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.254324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.254631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.254638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.254932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.254939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.255148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.255155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.255469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.255476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 
00:29:12.405 [2024-11-20 16:40:58.255768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.255775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.256093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.256100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.256418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.256425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.256719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.256725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.257034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.257041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 
00:29:12.405 [2024-11-20 16:40:58.257358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.257364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.257659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.257666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.257842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.257849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.258145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.258152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.258453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.258460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 
00:29:12.405 [2024-11-20 16:40:58.258763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.258771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.259070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.259077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.259402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.259408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.259762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.259768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.259929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.259936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 
00:29:12.405 [2024-11-20 16:40:58.260258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.260264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.260466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.260472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.260783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.260790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.261116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.261122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.261502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.261508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 
00:29:12.405 [2024-11-20 16:40:58.261813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.261820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.262133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.262140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.262441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.262447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.262757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.262763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.262920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.262927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 
00:29:12.405 [2024-11-20 16:40:58.263127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.263134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.263409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.263415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.263741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.405 [2024-11-20 16:40:58.263748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.405 qpair failed and we were unable to recover it. 00:29:12.405 [2024-11-20 16:40:58.264046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.264052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.264372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.264379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 
00:29:12.406 [2024-11-20 16:40:58.264669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.264677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.264979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.264989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.265311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.265317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.265603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.265610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.265917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.265925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 
00:29:12.406 [2024-11-20 16:40:58.266232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.266239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.266527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.266534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.266862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.266869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.267032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.267041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.267334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.267340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 
00:29:12.406 [2024-11-20 16:40:58.267653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.267659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.267969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.267976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.268357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.268365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.268654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.268662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.268961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.268969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 
00:29:12.406 [2024-11-20 16:40:58.269253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.269261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.269561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.269569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.269874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.269881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.270164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.270171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.270489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.270497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 
00:29:12.406 [2024-11-20 16:40:58.270795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.270802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.271168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.271175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.271484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.271492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.271811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.271818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.272112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.272120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 
00:29:12.406 [2024-11-20 16:40:58.272424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.272432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2395146 Killed "${NVMF_APP[@]}" "$@" 00:29:12.406 [2024-11-20 16:40:58.272810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.272818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 [2024-11-20 16:40:58.273024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.273031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:29:12.406 [2024-11-20 16:40:58.273441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.273449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 
00:29:12.406 [2024-11-20 16:40:58.273646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.273652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:12.406 [2024-11-20 16:40:58.273968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.273975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:12.406 [2024-11-20 16:40:58.274280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 [2024-11-20 16:40:58.274287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 00:29:12.406 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:12.406 [2024-11-20 16:40:58.274590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.406 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:12.406 [2024-11-20 16:40:58.274597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.406 qpair failed and we were unable to recover it. 
00:29:12.406 [2024-11-20 16:40:58.274878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.274884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.275200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.275207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.275547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.275554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.275851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.275864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.276031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.276038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 
00:29:12.407 [2024-11-20 16:40:58.276319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.276326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.276639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.276645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.276964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.276971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.277149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.277157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.277462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.277469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 
00:29:12.407 [2024-11-20 16:40:58.277779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.277786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.277957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.277964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.278312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.278319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.278632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.278638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.278942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.278949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 
00:29:12.407 [2024-11-20 16:40:58.279249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.279257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.279326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.279333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.279611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.279619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.279937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.279944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 00:29:12.407 [2024-11-20 16:40:58.280231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.407 [2024-11-20 16:40:58.280238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.407 qpair failed and we were unable to recover it. 
00:29:12.407 [2024-11-20 16:40:58.280445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.280453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 [2024-11-20 16:40:58.280673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.280680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 [2024-11-20 16:40:58.280978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.280988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 [2024-11-20 16:40:58.281366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.281374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 [2024-11-20 16:40:58.281689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.281697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 [2024-11-20 16:40:58.282024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.282033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2396178
00:29:12.407 [2024-11-20 16:40:58.282258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.282266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 [2024-11-20 16:40:58.282445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.282453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2396178
00:29:12.407 [2024-11-20 16:40:58.282635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.282642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2396178 ']'
00:29:12.407 [2024-11-20 16:40:58.282952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.282962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:12.407 [2024-11-20 16:40:58.283272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.283280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:12.407 [2024-11-20 16:40:58.283599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.283607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:12.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:12.407 [2024-11-20 16:40:58.283915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.283923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.407 16:40:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:12.407 [2024-11-20 16:40:58.284318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.407 [2024-11-20 16:40:58.284327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.407 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.284497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.284505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.284780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.284788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.285103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.285110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.285442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.285449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.285741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.285748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.286043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.286050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.286342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.286350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.286658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.286677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.287005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.287014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.287392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.287400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.287588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.287595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.287913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.287921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.288129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.288140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.288474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.288481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.288796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.288804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.289118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.289126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.289207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.289214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.289483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.289491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.289694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.289701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.289971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.289986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.290289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.290297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.290572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.290580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.290879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.290887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.291247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.291254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.291555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.291562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.291876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.291884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.292084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.292091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.292364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.292372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.292702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.292709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.293014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.293021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.293334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.293341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.293641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.293648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.293853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.293860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.294194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.294201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.294404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.294411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.294613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.294621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.294937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.294944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.408 [2024-11-20 16:40:58.295228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.408 [2024-11-20 16:40:58.295235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.408 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.295586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.295593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.295765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.295772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.296107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.296114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.296414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.296421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.296726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.296733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.296974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.296985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.297203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.297210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.297530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.297537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.297824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.297832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.298147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.298153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.298355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.298362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.298656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.298663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.298989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.298996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.299342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.299348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.299633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.299640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.299724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.299731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.299921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.299928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.300298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.300305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.300493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.300499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.300828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.300836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.301042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.301050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.301317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.301328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.301632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.301640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.301956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.301963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.302298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.302306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.302621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.302629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.302960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.302968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.303285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.303292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.303605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.303613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.303939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.303946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.304223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.304231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.304601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.304608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.304917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.304925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.305232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.305240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.305558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.305566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.305826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.305833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.306141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.306149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.409 [2024-11-20 16:40:58.306350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.409 [2024-11-20 16:40:58.306357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.409 qpair failed and we were unable to recover it.
00:29:12.410 [2024-11-20 16:40:58.306560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.410 [2024-11-20 16:40:58.306567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.410 qpair failed and we were unable to recover it.
00:29:12.410 [2024-11-20 16:40:58.306856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.306863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.307043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.307050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.307324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.307331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.307643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.307651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.307866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.307873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 
00:29:12.410 [2024-11-20 16:40:58.308250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.308257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.308603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.308610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.308787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.308794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.309027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.309034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.309314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.309320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 
00:29:12.410 [2024-11-20 16:40:58.309735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.309742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.309914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.309920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.310230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.310237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.310388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.310395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.310575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.310583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 
00:29:12.410 [2024-11-20 16:40:58.310777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.310785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.311113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.311120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.311341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.311349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.311680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.311688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.312037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.312044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 
00:29:12.410 [2024-11-20 16:40:58.312327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.312333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.312550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.312557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.312869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.312878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.313104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.313111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.313451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.313458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 
00:29:12.410 [2024-11-20 16:40:58.313778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.313785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.314105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.314112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.314501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.314507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.314807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.314814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.315014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.315022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 
00:29:12.410 [2024-11-20 16:40:58.315370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.315376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.410 [2024-11-20 16:40:58.315686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.410 [2024-11-20 16:40:58.315693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.410 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.315877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.315885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.316208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.316215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.316506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.316514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 
00:29:12.411 [2024-11-20 16:40:58.316834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.316841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.317144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.317151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.317465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.317473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.317794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.317800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.318107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.318114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 
00:29:12.411 [2024-11-20 16:40:58.318319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.318325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.318647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.318654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.318820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.318827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.319113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.319119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.319431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.319443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 
00:29:12.411 [2024-11-20 16:40:58.319778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.319785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.320093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.320100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.320469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.320477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.320790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.320796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.321084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.321091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 
00:29:12.411 [2024-11-20 16:40:58.321408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.321415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.321715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.321727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.321881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.321889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.322263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.322270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.322613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.322620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 
00:29:12.411 [2024-11-20 16:40:58.322931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.322939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.323237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.323244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.323452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.323459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.323793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.323800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 00:29:12.411 [2024-11-20 16:40:58.324123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.411 [2024-11-20 16:40:58.324131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.411 qpair failed and we were unable to recover it. 
00:29:12.687 [2024-11-20 16:40:58.324467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.687 [2024-11-20 16:40:58.324475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.687 qpair failed and we were unable to recover it. 00:29:12.687 [2024-11-20 16:40:58.324640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.687 [2024-11-20 16:40:58.324647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.687 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.324965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.324974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.325262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.325269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.325448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.325456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-20 16:40:58.325647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.325654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.325820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.325827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.326027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.326034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.326209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.326215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.326580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.326586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-20 16:40:58.326970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.326977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.327338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.327345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.327639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.327647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.327986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.327994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.328280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.328287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-20 16:40:58.328596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.328603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.328932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.328939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.329018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.329025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.329352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.329360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.329692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.329699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-20 16:40:58.330012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.330020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.330319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.330327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.330637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.330644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.330926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.330933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.331279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.331286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.688 [2024-11-20 16:40:58.331608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.331615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.331935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.331942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.332255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.332262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.332568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.332574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 00:29:12.688 [2024-11-20 16:40:58.332864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.688 [2024-11-20 16:40:58.332871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.688 qpair failed and we were unable to recover it. 
00:29:12.689 [2024-11-20 16:40:58.340099] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
00:29:12.689 [2024-11-20 16:40:58.340150] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:12.691 [2024-11-20 16:40:58.363498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.363505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.363789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.363796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.364102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.364109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.364302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.364309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.364648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.364654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 
00:29:12.691 [2024-11-20 16:40:58.364975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.364985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.365135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.365150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.365462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.365469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.365782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.365789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.365984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.365992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 
00:29:12.691 [2024-11-20 16:40:58.366163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.366171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.366453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.366460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.366759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.366766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.367099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.691 [2024-11-20 16:40:58.367107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.691 qpair failed and we were unable to recover it. 00:29:12.691 [2024-11-20 16:40:58.367412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.367419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 
00:29:12.692 [2024-11-20 16:40:58.367738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.367745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.368070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.368078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.368382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.368389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.368686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.368693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.368978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.368988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 
00:29:12.692 [2024-11-20 16:40:58.369293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.369300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.369506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.369513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.369672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.369679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.369992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.369999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.370362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.370370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 
00:29:12.692 [2024-11-20 16:40:58.370686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.370693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.371008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.371015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.371335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.371342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.371654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.371662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.371987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.371994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 
00:29:12.692 [2024-11-20 16:40:58.372313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.372321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.372616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.372622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.372877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.372883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.373182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.373189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.373355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.373362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 
00:29:12.692 [2024-11-20 16:40:58.373706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.373713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.374025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.374032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.374219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.374225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.374556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.374562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.374871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.374878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 
00:29:12.692 [2024-11-20 16:40:58.375199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.375207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.375371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.375378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.375758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.375766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.375966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.375973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.692 [2024-11-20 16:40:58.376205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.376212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 
00:29:12.692 [2024-11-20 16:40:58.376503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.692 [2024-11-20 16:40:58.376509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.692 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.376784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.376792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.377131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.377139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.377442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.377449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.377647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.377653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 
00:29:12.693 [2024-11-20 16:40:58.377946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.377953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.378277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.378284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.378570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.378582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.378865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.378872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.379194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.379201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 
00:29:12.693 [2024-11-20 16:40:58.379503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.379510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.379914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.379921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.380239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.380246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.380567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.380574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.380857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.380864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 
00:29:12.693 [2024-11-20 16:40:58.381039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.381046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.381390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.381397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.381720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.381727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.381906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.381913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.382293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.382300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 
00:29:12.693 [2024-11-20 16:40:58.382512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.382518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.382780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.382786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.383016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.383023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.383210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.383218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.383302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.383308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 
00:29:12.693 [2024-11-20 16:40:58.383488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.383495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.383778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.383785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.384100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.384107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.384439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.384446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.384738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.384745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 
00:29:12.693 [2024-11-20 16:40:58.385052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.385059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.385383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.385390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.385589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.385596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.385886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.385892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.386207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.386214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 
00:29:12.693 [2024-11-20 16:40:58.386511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.386517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.386832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.386839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.387150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.387157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.387361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.693 [2024-11-20 16:40:58.387368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.693 qpair failed and we were unable to recover it. 00:29:12.693 [2024-11-20 16:40:58.387681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.694 [2024-11-20 16:40:58.387687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.694 qpair failed and we were unable to recover it. 
00:29:12.697 [2024-11-20 16:40:58.418893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.418907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.419334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.419341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.419514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.419520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.419894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.419901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.420212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.420219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 
00:29:12.697 [2024-11-20 16:40:58.420552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.420558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.420900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.420907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.421086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.421094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.421244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.421250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.421540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.421547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 
00:29:12.697 [2024-11-20 16:40:58.421698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.421704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.421858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.421864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.422042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.422049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.422199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.422206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.422501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.422508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 
00:29:12.697 [2024-11-20 16:40:58.422824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.422831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.422991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.422998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.423339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.423345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.423650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.423657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.423958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.423966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 
00:29:12.697 [2024-11-20 16:40:58.424265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.424272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.424589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.424596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.424760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.697 [2024-11-20 16:40:58.424768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.697 qpair failed and we were unable to recover it. 00:29:12.697 [2024-11-20 16:40:58.425083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.425090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.425406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.425413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 
00:29:12.698 [2024-11-20 16:40:58.425728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.425734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.426046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.426053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.426238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.426245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.426598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.426605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.426923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.426930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 
00:29:12.698 [2024-11-20 16:40:58.427138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.427145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.427490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.427496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.427865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.427871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.428149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.428158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.428478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.428485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 
00:29:12.698 [2024-11-20 16:40:58.428799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.428806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.429111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.429119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.429424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.429431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.429748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.429756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.430077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.430084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 
00:29:12.698 [2024-11-20 16:40:58.430393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.430399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.430723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.430729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.431055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.431062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.431385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.431392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.431701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.431708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 
00:29:12.698 [2024-11-20 16:40:58.432028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.432035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.432347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.432354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.432670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.432677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.432996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.433002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.433310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.433316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 
00:29:12.698 [2024-11-20 16:40:58.433640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.433647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.433961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.433967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.698 [2024-11-20 16:40:58.434157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.698 [2024-11-20 16:40:58.434164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.698 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.434444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.434451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.434805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.434812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 
00:29:12.699 [2024-11-20 16:40:58.435130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.435137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.435508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.435514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.435773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.435779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.436078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.436085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.436404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.436411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 
00:29:12.699 [2024-11-20 16:40:58.436743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.436750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.436937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.436943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.437129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.437136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.437453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.437460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.437700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.437708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 
00:29:12.699 [2024-11-20 16:40:58.438022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.438029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.438253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.438260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.438486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.438492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.438790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.438798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.439125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.439132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 
00:29:12.699 [2024-11-20 16:40:58.439453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.439459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.439623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.439629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.439933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.439940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.440079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.699 [2024-11-20 16:40:58.440238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.440245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 00:29:12.699 [2024-11-20 16:40:58.440572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.699 [2024-11-20 16:40:58.440580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.699 qpair failed and we were unable to recover it. 
00:29:12.699 [2024-11-20 16:40:58.440891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.440899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.441214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.441221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.441520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.441528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.441844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.441850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.442051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.442058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.442335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.442342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.442652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.442660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.442963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.442970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.443292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.443299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.443471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.443479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.443770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.443778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.443958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.443968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.444352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.444360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.444637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.444644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.444970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.699 [2024-11-20 16:40:58.444978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.699 qpair failed and we were unable to recover it.
00:29:12.699 [2024-11-20 16:40:58.445382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.445390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.445715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.445723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.446043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.446050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.446432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.446440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.446612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.446619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.446973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.446984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.447312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.447319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.447613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.447620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.447928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.447936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.448163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.448170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.448449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.448457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.448766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.448773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.449083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.449091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.449391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.449398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.449702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.449710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.450000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.450007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.450186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.450193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.450594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.450601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.450905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.450912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.451248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.451255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.451416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.451423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.451581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.451589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.451936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.451943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.452297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.452306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.452613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.452620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.452951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.452957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.453186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.453193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.453516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.453523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.453809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.453816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.454133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.454141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.454347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.454353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.700 qpair failed and we were unable to recover it.
00:29:12.700 [2024-11-20 16:40:58.454717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.700 [2024-11-20 16:40:58.454724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.455009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.455016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.455197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.455204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.455495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.455502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.455809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.455816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.456125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.456132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.456308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.456315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.456632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.456639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.456871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.456878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.457055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.457063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.457215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.457222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.457532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.457539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.457821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.457834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.458137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.458144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.458341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.458348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.458531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.458538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.458854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.458861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.459248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.459255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.459547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.459555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.459881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.459887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.460264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.460272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.460559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.460566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.460851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.460865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.461153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.461160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.461375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.461383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.461569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.461576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.461893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.461900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.462199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.462206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.462498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.462505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.462824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.462831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.463138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.463145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.463466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.463473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.463795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.463803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.464111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.701 [2024-11-20 16:40:58.464119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.701 qpair failed and we were unable to recover it.
00:29:12.701 [2024-11-20 16:40:58.464430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.464436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.464738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.464744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.464917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.464923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.465297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.465304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.465605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.465612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.465902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.465908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.466269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.466277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.466596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.466603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.466810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.466817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.467108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.467115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.467424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.467430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.467774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.467781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.468057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.468064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.468387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.468394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.468564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.468571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.468769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.468776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.469077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.469084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.469406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.469413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.469723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.469730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.469947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.469954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.470254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.470261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.470597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.470604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.470915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.470921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.471277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.471284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.471597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.471604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.471910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.471917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.472275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.472283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.472595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.472602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.472798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.472805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.473005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.473012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.473308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.473316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.473596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.473604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.702 [2024-11-20 16:40:58.473794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.702 [2024-11-20 16:40:58.473802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.702 qpair failed and we were unable to recover it.
00:29:12.703 [2024-11-20 16:40:58.474116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.703 [2024-11-20 16:40:58.474123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.703 qpair failed and we were unable to recover it.
00:29:12.703 [2024-11-20 16:40:58.474419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.703 [2024-11-20 16:40:58.474425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.703 qpair failed and we were unable to recover it.
00:29:12.703 [2024-11-20 16:40:58.474727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.703 [2024-11-20 16:40:58.474734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.703 qpair failed and we were unable to recover it. 00:29:12.703 [2024-11-20 16:40:58.474878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.703 [2024-11-20 16:40:58.474884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.703 qpair failed and we were unable to recover it. 00:29:12.703 [2024-11-20 16:40:58.475246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.703 [2024-11-20 16:40:58.475253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.703 qpair failed and we were unable to recover it. 00:29:12.703 [2024-11-20 16:40:58.475552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.703 [2024-11-20 16:40:58.475561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.703 qpair failed and we were unable to recover it. 00:29:12.703 [2024-11-20 16:40:58.475919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.703 [2024-11-20 16:40:58.475926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.703 qpair failed and we were unable to recover it. 
00:29:12.703 [2024-11-20 16:40:58.476278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:12.703 [2024-11-20 16:40:58.476307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:12.703 [2024-11-20 16:40:58.476315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:12.703 [2024-11-20 16:40:58.476322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:12.703 [2024-11-20 16:40:58.476328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:12.703 [2024-11-20 16:40:58.477977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:12.703 [2024-11-20 16:40:58.478100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:12.703 [2024-11-20 16:40:58.478439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:12.703 [2024-11-20 16:40:58.478439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:29:12.705 [2024-11-20 16:40:58.498915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.705 [2024-11-20 16:40:58.498922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.705 qpair failed and we were unable to recover it. 00:29:12.705 [2024-11-20 16:40:58.499103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.705 [2024-11-20 16:40:58.499112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.705 qpair failed and we were unable to recover it. 00:29:12.705 [2024-11-20 16:40:58.499356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.705 [2024-11-20 16:40:58.499362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.705 qpair failed and we were unable to recover it. 00:29:12.705 [2024-11-20 16:40:58.499519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.499525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.499589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.499595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 
00:29:12.706 [2024-11-20 16:40:58.499988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.499995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.500188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.500195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.500565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.500571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.500909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.500916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.501193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.501200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 
00:29:12.706 [2024-11-20 16:40:58.501371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.501378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.501535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.501542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.501845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.501852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.502025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.502033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.502348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.502355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 
00:29:12.706 [2024-11-20 16:40:58.502654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.502661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.502849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.502855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.503168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.503176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.503485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.503492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.503546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.503553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 
00:29:12.706 [2024-11-20 16:40:58.503724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.503731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.504053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.504060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.504234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.504241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.504552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.504560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.504729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.504737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 
00:29:12.706 [2024-11-20 16:40:58.504987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.504995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.505315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.505323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.505489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.505496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.505864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.505871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.506204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.506211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 
00:29:12.706 [2024-11-20 16:40:58.506526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.506533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.506847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.506854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.507248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.706 [2024-11-20 16:40:58.507255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.706 qpair failed and we were unable to recover it. 00:29:12.706 [2024-11-20 16:40:58.507455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.507462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.507660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.507667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 
00:29:12.707 [2024-11-20 16:40:58.507845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.507852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.508022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.508028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.508281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.508288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.508611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.508617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.508945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.508952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 
00:29:12.707 [2024-11-20 16:40:58.509012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.509019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.509159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.509168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.509310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.509317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.509620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.509627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.509944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.509951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 
00:29:12.707 [2024-11-20 16:40:58.510053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.510060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 
Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Read completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 Write completed with error (sct=0, sc=8) 00:29:12.707 starting I/O failed 00:29:12.707 [2024-11-20 16:40:58.510790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.707 [2024-11-20 16:40:58.511396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.511497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe980000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 
00:29:12.707 [2024-11-20 16:40:58.511747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.511755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.511923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.511930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.512214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.512221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.512399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.512406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.512615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.512622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 
00:29:12.707 [2024-11-20 16:40:58.512901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.512907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.513168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.513175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.707 [2024-11-20 16:40:58.513504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.707 [2024-11-20 16:40:58.513511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.707 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.513817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.513824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.514001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.514008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 
00:29:12.708 [2024-11-20 16:40:58.514193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.514200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.514501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.514508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.514702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.514710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.515074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.515081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.515426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.515435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 
00:29:12.708 [2024-11-20 16:40:58.515757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.515764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.516183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.516190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.516488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.516496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.516803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.516810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 00:29:12.708 [2024-11-20 16:40:58.517111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.517118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it. 
00:29:12.708 [2024-11-20 16:40:58.517440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.708 [2024-11-20 16:40:58.517447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.708 qpair failed and we were unable to recover it.
00:29:12.710 [last message repeated for every reconnect attempt from 16:40:58.517776 through 16:40:58.546963: connect() failed with errno = 111 (ECONNREFUSED) on tqpair=0x7fe984000b90 to 10.0.0.2, port 4420; each qpair failed and could not be recovered]
00:29:12.711 [2024-11-20 16:40:58.547260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.711 [2024-11-20 16:40:58.547268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.711 qpair failed and we were unable to recover it. 00:29:12.711 [2024-11-20 16:40:58.547553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.711 [2024-11-20 16:40:58.547561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.711 qpair failed and we were unable to recover it. 00:29:12.711 [2024-11-20 16:40:58.547867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.711 [2024-11-20 16:40:58.547874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.711 qpair failed and we were unable to recover it. 00:29:12.711 [2024-11-20 16:40:58.548190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.711 [2024-11-20 16:40:58.548205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.711 qpair failed and we were unable to recover it. 00:29:12.711 [2024-11-20 16:40:58.548372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.711 [2024-11-20 16:40:58.548378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.711 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-11-20 16:40:58.548562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.548569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.548929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.548936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.549258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.549265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.549585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.549593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.549766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.549775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-11-20 16:40:58.550023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.550030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.550361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.550368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.550533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.550541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.550611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.550619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.550823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.550830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-11-20 16:40:58.550995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.551001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.551278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.551292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.551463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.551470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.551634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.551641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.552019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.552026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-11-20 16:40:58.552180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.552187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.552477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.552484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.552798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.552805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.553122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.553128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.553299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.553306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-11-20 16:40:58.553548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.553555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.553890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.553896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.554103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.554109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.554539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.554546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.554586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.554592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-11-20 16:40:58.554751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.554757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.555048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.555055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.555376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.555382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.555687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.555694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.556000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.556006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 
00:29:12.712 [2024-11-20 16:40:58.556161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.556169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.556434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.556441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.712 qpair failed and we were unable to recover it. 00:29:12.712 [2024-11-20 16:40:58.556752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.712 [2024-11-20 16:40:58.556758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.557046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.557053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.557342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.557351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 
00:29:12.713 [2024-11-20 16:40:58.557530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.557537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.557903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.557911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.558087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.558095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.558283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.558289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.558546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.558552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 
00:29:12.713 [2024-11-20 16:40:58.558735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.558743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.559046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.559053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.559283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.559289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.559572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.559579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.559740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.559746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 
00:29:12.713 [2024-11-20 16:40:58.560019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.560026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.560302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.560308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.560588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.560595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.560906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.560913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.561141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.561148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 
00:29:12.713 [2024-11-20 16:40:58.561322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.561328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.561656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.561662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.562004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.562012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.562285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.562291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.562580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.562587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 
00:29:12.713 [2024-11-20 16:40:58.562892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.562899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.563277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.563284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.563534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.563540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.563862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.563868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.564173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.564180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 
00:29:12.713 [2024-11-20 16:40:58.564495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.564501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.564655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.564661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.564826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.564833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.713 [2024-11-20 16:40:58.565129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.713 [2024-11-20 16:40:58.565136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.713 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.565452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.565458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 
00:29:12.714 [2024-11-20 16:40:58.565495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.565502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.565784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.565791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.566167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.566174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.566485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.566492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.566794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.566802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 
00:29:12.714 [2024-11-20 16:40:58.567095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.567102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.567398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.567404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.567720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.567726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.568029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.568036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 00:29:12.714 [2024-11-20 16:40:58.568218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.714 [2024-11-20 16:40:58.568226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.714 qpair failed and we were unable to recover it. 
00:29:12.717 [2024-11-20 16:40:58.598684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.598690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.598863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.598870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.599181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.599187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.599393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.599400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.599725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.599731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 
00:29:12.717 [2024-11-20 16:40:58.599915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.599922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.600192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.600200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.600508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.600515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.600798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.600811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.600957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.600966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 
00:29:12.717 [2024-11-20 16:40:58.601251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.601258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.601558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.601565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.601717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.601724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.601896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.601904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.602227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.602233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 
00:29:12.717 [2024-11-20 16:40:58.602585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.602591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.602721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.717 [2024-11-20 16:40:58.602729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.717 qpair failed and we were unable to recover it. 00:29:12.717 [2024-11-20 16:40:58.602904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.602911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.603233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.603240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.603282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.603288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 
00:29:12.718 [2024-11-20 16:40:58.603504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.603511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.603817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.603824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.603980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.603994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.604156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.604163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.604478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.604485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 
00:29:12.718 [2024-11-20 16:40:58.604803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.604810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.604987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.604994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.605286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.605293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.605609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.605616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.605926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.605933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 
00:29:12.718 [2024-11-20 16:40:58.606211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.606218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.606508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.606515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.606887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.606894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.607185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.607192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.607502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.607508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 
00:29:12.718 [2024-11-20 16:40:58.607817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.607824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.608061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.608068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.608461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.608467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.608762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.608768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.609075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.609082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 
00:29:12.718 [2024-11-20 16:40:58.609465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.609472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.609775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.609781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.609969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.609975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.610194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.610201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.610484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.610491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 
00:29:12.718 [2024-11-20 16:40:58.610845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.610852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.611133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.611140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.611305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.611312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.611494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.611501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.611755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.611763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 
00:29:12.718 [2024-11-20 16:40:58.612070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.612078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.612391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.612399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.612578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.612585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.612910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.718 [2024-11-20 16:40:58.612917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.718 qpair failed and we were unable to recover it. 00:29:12.718 [2024-11-20 16:40:58.613136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.613143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 
00:29:12.719 [2024-11-20 16:40:58.613471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.613477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.613804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.613810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.613973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.613980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.614176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.614183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.614490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.614497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 
00:29:12.719 [2024-11-20 16:40:58.614659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.614665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.614936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.614950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.615132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.615140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.615444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.615450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.615740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.615754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 
00:29:12.719 [2024-11-20 16:40:58.616103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.616110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.616401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.616409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.616547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.616554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.616631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.616637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.616790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.616797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 
00:29:12.719 [2024-11-20 16:40:58.616977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.616986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.617286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.617292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.617460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.617468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.617772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.617778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 00:29:12.719 [2024-11-20 16:40:58.618091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.719 [2024-11-20 16:40:58.618098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.719 qpair failed and we were unable to recover it. 
00:29:12.719 [2024-11-20 16:40:58.618401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:12.719 [2024-11-20 16:40:58.618408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:12.719 qpair failed and we were unable to recover it.
00:29:12.998 [2024-11-20 16:40:58.641144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.998 [2024-11-20 16:40:58.641151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.998 qpair failed and we were unable to recover it. 00:29:12.998 [2024-11-20 16:40:58.641331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.998 [2024-11-20 16:40:58.641338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:12.998 qpair failed and we were unable to recover it. 00:29:12.998 [2024-11-20 16:40:58.641673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.998 [2024-11-20 16:40:58.641712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.998 qpair failed and we were unable to recover it. 00:29:12.998 [2024-11-20 16:40:58.641910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.641923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.642132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.642144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 
00:29:12.999 [2024-11-20 16:40:58.642334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.642343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.642837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.642847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.642892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.642901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.643213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.643224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.643448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.643457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 
00:29:12.999 [2024-11-20 16:40:58.643642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.643652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.643998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.644010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.644336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.644345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.644642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.644651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 00:29:12.999 [2024-11-20 16:40:58.644691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:12.999 [2024-11-20 16:40:58.644701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:12.999 qpair failed and we were unable to recover it. 
00:29:13.001 [2024-11-20 16:40:58.670337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.001 [2024-11-20 16:40:58.670346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.001 qpair failed and we were unable to recover it.
00:29:13.001 [2024-11-20 16:40:58.670629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.001 [2024-11-20 16:40:58.670638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.001 qpair failed and we were unable to recover it.
00:29:13.001 [2024-11-20 16:40:58.670952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.001 [2024-11-20 16:40:58.670961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.001 qpair failed and we were unable to recover it.
00:29:13.001 [2024-11-20 16:40:58.671147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.001 [2024-11-20 16:40:58.671157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.001 qpair failed and we were unable to recover it.
00:29:13.001 [2024-11-20 16:40:58.671191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8aef30 (9): Bad file descriptor
00:29:13.001 [2024-11-20 16:40:58.671706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.001 [2024-11-20 16:40:58.671758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe980000b90 with addr=10.0.0.2, port=4420
00:29:13.001 qpair failed and we were unable to recover it.
00:29:13.001 [2024-11-20 16:40:58.672239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.001 [2024-11-20 16:40:58.672328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe980000b90 with addr=10.0.0.2, port=4420
00:29:13.001 qpair failed and we were unable to recover it.
00:29:13.001 [2024-11-20 16:40:58.672569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.001 [2024-11-20 16:40:58.672580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.001 qpair failed and we were unable to recover it.
00:29:13.001 [2024-11-20 16:40:58.672735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.002 [2024-11-20 16:40:58.672744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.002 qpair failed and we were unable to recover it.
00:29:13.002 [2024-11-20 16:40:58.672945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.002 [2024-11-20 16:40:58.672955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.002 qpair failed and we were unable to recover it.
00:29:13.002 [2024-11-20 16:40:58.673274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.002 [2024-11-20 16:40:58.673283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.002 qpair failed and we were unable to recover it.
00:29:13.002 [2024-11-20 16:40:58.673654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.002 [2024-11-20 16:40:58.673663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.002 qpair failed and we were unable to recover it.
00:29:13.002 [2024-11-20 16:40:58.673955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.002 [2024-11-20 16:40:58.673965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.002 qpair failed and we were unable to recover it.
00:29:13.002 [2024-11-20 16:40:58.674298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.002 [2024-11-20 16:40:58.674307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.002 qpair failed and we were unable to recover it.
00:29:13.002 [2024-11-20 16:40:58.674598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.002 [2024-11-20 16:40:58.674608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.002 qpair failed and we were unable to recover it.
00:29:13.002 [2024-11-20 16:40:58.674775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.002 [2024-11-20 16:40:58.674786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.002 qpair failed and we were unable to recover it.
00:29:13.002 [2024-11-20 16:40:58.675202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.675212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.675421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.675430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.675689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.675698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.676038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.676048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.676384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.676393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 
00:29:13.002 [2024-11-20 16:40:58.676566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.676576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.676884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.676893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.677226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.677236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.677404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.677413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.677694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.677703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 
00:29:13.002 [2024-11-20 16:40:58.678056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.678066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.678262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.678271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.678451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.678461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.678765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.678775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.679119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.679129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 
00:29:13.002 [2024-11-20 16:40:58.679281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.679290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.679643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.679653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.679970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.679979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.680374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.680383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.680552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.680561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 
00:29:13.002 [2024-11-20 16:40:58.680843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.680854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.681023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.681034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.681238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.681248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.681419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.681428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.002 qpair failed and we were unable to recover it. 00:29:13.002 [2024-11-20 16:40:58.681668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.002 [2024-11-20 16:40:58.681679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 
00:29:13.003 [2024-11-20 16:40:58.681975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.681987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.682188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.682197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.682544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.682553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.682841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.682851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.683180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.683190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 
00:29:13.003 [2024-11-20 16:40:58.683472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.683489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.683720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.683729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.684050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.684060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.684272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.684282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.684449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.684459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 
00:29:13.003 [2024-11-20 16:40:58.684779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.684788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.684980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.684995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.685184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.685193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.685356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.685365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.685587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.685596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 
00:29:13.003 [2024-11-20 16:40:58.685939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.685948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.686267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.686277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.686581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.686591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.686926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.686936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.687283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.687293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 
00:29:13.003 [2024-11-20 16:40:58.687569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.687578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.687915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.687925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.688224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.688237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.688399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.688409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.688602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.688612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 
00:29:13.003 [2024-11-20 16:40:58.688943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.688953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.689334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.689344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.689524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.689534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.689833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.689844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.689979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.689994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 
00:29:13.003 [2024-11-20 16:40:58.690297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.690307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.690612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.690621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.690930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.690939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.691130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.691141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.691404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.691414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 
00:29:13.003 [2024-11-20 16:40:58.691631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.691640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.691929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.691938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.692225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.003 [2024-11-20 16:40:58.692235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.003 qpair failed and we were unable to recover it. 00:29:13.003 [2024-11-20 16:40:58.692574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.692583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.692763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.692773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 
00:29:13.004 [2024-11-20 16:40:58.693056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.693066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.693367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.693376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.693710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.693720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.693877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.693886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.694156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.694166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 
00:29:13.004 [2024-11-20 16:40:58.694210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.694218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.694506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.694517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.694671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.694680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.695039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.695049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 00:29:13.004 [2024-11-20 16:40:58.695357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.695367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it. 
00:29:13.004 [2024-11-20 16:40:58.695657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.004 [2024-11-20 16:40:58.695668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.004 qpair failed and we were unable to recover it.
00:29:13.007 [2024-11-20 16:40:58.727364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.727375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.727676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.727686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.728001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.728011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.728338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.728347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.728665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.728675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 
00:29:13.007 [2024-11-20 16:40:58.728834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.728844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.729049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.729060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.729233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.729242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.729515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.729525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.729834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.729844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 
00:29:13.007 [2024-11-20 16:40:58.730161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.730171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.730335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.730345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.730635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.730644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.730813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.730823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.731001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.731013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 
00:29:13.007 [2024-11-20 16:40:58.731331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.731342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.731664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.731674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.731895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.731905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.732192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.732203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.732529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.732539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 
00:29:13.007 [2024-11-20 16:40:58.732835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.732844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.007 [2024-11-20 16:40:58.733141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.007 [2024-11-20 16:40:58.733151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.007 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.733462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.733472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.733641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.733651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.733938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.733948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 
00:29:13.008 [2024-11-20 16:40:58.734260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.734270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.734573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.734583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.734945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.734955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.735273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.735284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.735563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.735573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 
00:29:13.008 [2024-11-20 16:40:58.735901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.735912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.736222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.736233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.736421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.736431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.736608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.736620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.736815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.736827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 
00:29:13.008 [2024-11-20 16:40:58.736867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.736876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.737166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.737177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.737522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.737533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.737844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.737854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.738146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.738156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 
00:29:13.008 [2024-11-20 16:40:58.738362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.738372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.738693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.738702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.738872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.738882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.739282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.739292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.739611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.739621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 
00:29:13.008 [2024-11-20 16:40:58.739928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.739938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.740248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.740259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.740392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.740402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.740731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.740741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.740964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.740974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 
00:29:13.008 [2024-11-20 16:40:58.741308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.741317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.741606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.741616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.741834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.741844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.742163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.742173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.742354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.742364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 
00:29:13.008 [2024-11-20 16:40:58.742757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.742767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.742915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.742925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.743098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.743108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.743507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.743517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 00:29:13.008 [2024-11-20 16:40:58.743790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.008 [2024-11-20 16:40:58.743799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.008 qpair failed and we were unable to recover it. 
00:29:13.008 [2024-11-20 16:40:58.744130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.744141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.744430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.744441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.744634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.744644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.744912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.744921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.745115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.745126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 
00:29:13.009 [2024-11-20 16:40:58.745305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.745315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.745644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.745655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.745950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.745960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.746160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.746170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.746459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.746468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 
00:29:13.009 [2024-11-20 16:40:58.746778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.746788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.746984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.746994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.747281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.747291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.747677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.747686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.748010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.748020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 
00:29:13.009 [2024-11-20 16:40:58.748276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.748289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.748582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.748592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.748800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.748809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.749129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.749139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 00:29:13.009 [2024-11-20 16:40:58.749447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.009 [2024-11-20 16:40:58.749458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.009 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock error for tqpair=0x8a1010 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats with timestamps through 2024-11-20 16:40:58.778585; repeats elided ...]
00:29:13.012 [2024-11-20 16:40:58.778876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.778893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.779218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.779228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.779543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.779553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.779758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.779768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.779999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.780009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 
00:29:13.012 [2024-11-20 16:40:58.780306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.780316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.780659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.780668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.780881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.780891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.781206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.781216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.781550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.781560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 
00:29:13.012 [2024-11-20 16:40:58.781858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.781868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.782147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.782157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.782333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.782343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.782645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.782654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.782823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.782833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 
00:29:13.012 [2024-11-20 16:40:58.783111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.783121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.783429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.783439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.783647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.783656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.012 [2024-11-20 16:40:58.783956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.012 [2024-11-20 16:40:58.783966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.012 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.784092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.784101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 
00:29:13.013 [2024-11-20 16:40:58.784397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.784406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.784760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.784772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.785104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.785114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.785313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.785322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.785655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.785665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 
00:29:13.013 [2024-11-20 16:40:58.785952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.785968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.786149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.786159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.786394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.786403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.786633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.786643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.786939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.786948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 
00:29:13.013 [2024-11-20 16:40:58.787270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.787280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.787497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.787506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.787695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.787706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.788036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.788046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.788344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.788354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 
00:29:13.013 [2024-11-20 16:40:58.788659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.788668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.788868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.788878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.789215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.789225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.789413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.789422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.789787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.789796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 
00:29:13.013 [2024-11-20 16:40:58.790171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.790181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.790220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.790230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.790549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.790558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.790717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.790727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.791048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.791058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 
00:29:13.013 [2024-11-20 16:40:58.791351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.791360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.791752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.791761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.791933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.791943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.792274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.792286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.792614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.792623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 
00:29:13.013 [2024-11-20 16:40:58.792964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.792974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.793152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.793161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.793467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.793477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.793790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.793801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.794116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.794126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 
00:29:13.013 [2024-11-20 16:40:58.794281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.794291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.794574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.013 [2024-11-20 16:40:58.794583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.013 qpair failed and we were unable to recover it. 00:29:13.013 [2024-11-20 16:40:58.794891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.794900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.795176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.795186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.795354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.795364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 
00:29:13.014 [2024-11-20 16:40:58.795748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.795757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.796050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.796060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.796380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.796390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.796703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.796712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.797042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.797052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 
00:29:13.014 [2024-11-20 16:40:58.797225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.797234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.797448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.797457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.797780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.797789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.798115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.798124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.798286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.798296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 
00:29:13.014 [2024-11-20 16:40:58.798662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.798672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.798990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.799000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.799344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.799354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.799658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.799668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.799831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.799842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 
00:29:13.014 [2024-11-20 16:40:58.800174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.800184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.800496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.800506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.800763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.800772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.801081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.801091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 00:29:13.014 [2024-11-20 16:40:58.801178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.014 [2024-11-20 16:40:58.801187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.014 qpair failed and we were unable to recover it. 
00:29:13.017 [2024-11-20 16:40:58.829629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.829639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.829962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.829971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.830133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.830143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.830357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.830368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.830690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.830699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 
00:29:13.017 [2024-11-20 16:40:58.830832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.830842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.831149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.831159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.831551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.831560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.831857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.831867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.832098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.832108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 
00:29:13.017 [2024-11-20 16:40:58.832474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.832483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.832780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.832790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.833135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.833145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.833479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.833488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.833796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.833805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 
00:29:13.017 [2024-11-20 16:40:58.834096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.834106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.017 qpair failed and we were unable to recover it. 00:29:13.017 [2024-11-20 16:40:58.834423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.017 [2024-11-20 16:40:58.834433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.834721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.834731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.835099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.835109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.835459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.835470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 
00:29:13.018 [2024-11-20 16:40:58.835785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.835795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.836098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.836108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.836396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.836406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.836704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.836713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.837002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.837012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 
00:29:13.018 [2024-11-20 16:40:58.837198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.837209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.837396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.837407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.837623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.837633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.837937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.837946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.838001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.838010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 
00:29:13.018 [2024-11-20 16:40:58.838338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.838347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.838508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.838518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.838707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.838717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.838760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.838769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.838948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.838958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 
00:29:13.018 [2024-11-20 16:40:58.839306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.839318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.839521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.839530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.839691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.839700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.839851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.839860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.840040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.840050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 
00:29:13.018 [2024-11-20 16:40:58.840330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.840339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.840538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.840548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.840845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.840855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.841014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.841025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.841202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.841211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 
00:29:13.018 [2024-11-20 16:40:58.841515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.841524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.841584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.841593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.841902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.841912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.842356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.842366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.842677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.842686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 
00:29:13.018 [2024-11-20 16:40:58.843000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.843010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.843273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.843283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.843583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.843593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.843901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.843911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 00:29:13.018 [2024-11-20 16:40:58.844229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.018 [2024-11-20 16:40:58.844239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.018 qpair failed and we were unable to recover it. 
00:29:13.019 [2024-11-20 16:40:58.844404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.844413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.844786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.844795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.845187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.845196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.845367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.845376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.845554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.845565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 
00:29:13.019 [2024-11-20 16:40:58.845753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.845763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.845944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.845954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.846239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.846249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.846529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.846539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.846725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.846735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 
00:29:13.019 [2024-11-20 16:40:58.846930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.846940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.847154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.847164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.847492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.847501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.847806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.847815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.848002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.848012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 
00:29:13.019 [2024-11-20 16:40:58.848352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.848362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.848576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.848586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.848934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.848943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.849145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.849155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.849360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.849370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 
00:29:13.019 [2024-11-20 16:40:58.849605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.849614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.849929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.849942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.850250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.850260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.850680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.850690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.850873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.850883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 
00:29:13.019 [2024-11-20 16:40:58.851041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.851050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.851391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.851400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.851616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.851626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.851799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.851809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.851997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.852008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 
00:29:13.019 [2024-11-20 16:40:58.852292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.852301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.852488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.852497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.852711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.852721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.852893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.852903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.853270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.853280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 
00:29:13.019 [2024-11-20 16:40:58.853472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.853481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.853689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.853700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.853970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.019 [2024-11-20 16:40:58.853980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.019 qpair failed and we were unable to recover it. 00:29:13.019 [2024-11-20 16:40:58.854290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.854300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.854596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.854606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 
00:29:13.020 [2024-11-20 16:40:58.854792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.854802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.855100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.855110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.855292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.855302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.855650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.855659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.855974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.855994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 
00:29:13.020 [2024-11-20 16:40:58.856058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.856066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.856254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.856263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.856586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.856596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.856797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.856809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.857117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.857127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 
00:29:13.020 [2024-11-20 16:40:58.857427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.857437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.857727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.857738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.858049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.858060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.858237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.858246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.858519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.858529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 
00:29:13.020 [2024-11-20 16:40:58.858718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.858727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.859058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.859068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.859443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.859452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.859742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.859752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.859951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.859961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 
00:29:13.020 [2024-11-20 16:40:58.860266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.860276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.860488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.860498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.860829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.860839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.861149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.861159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.861535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.861544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 
00:29:13.020 [2024-11-20 16:40:58.861709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.861719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.861956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.861967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.862273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.862283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.862453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.862463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.862560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.862569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 
00:29:13.020 [2024-11-20 16:40:58.862774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.862784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.862958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.020 [2024-11-20 16:40:58.862968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.020 qpair failed and we were unable to recover it. 00:29:13.020 [2024-11-20 16:40:58.863330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.863341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.863623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.863632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.863812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.863822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 
00:29:13.021 [2024-11-20 16:40:58.863997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.864007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.864428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.864438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.864751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.864760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.865098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.865108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.865406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.865415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 
00:29:13.021 [2024-11-20 16:40:58.865709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.865719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.865940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.865950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.866114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.866124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.866422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.866431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.866662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.866672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 
00:29:13.021 [2024-11-20 16:40:58.866862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.866873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.867066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.867078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.867282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.867292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.867598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.867608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.867797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.867810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 
00:29:13.021 [2024-11-20 16:40:58.868113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.868124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.868388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.868398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.868568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.868578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.868744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.868754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.869077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.869087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 
00:29:13.021 [2024-11-20 16:40:58.869336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.869345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.869563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.869572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.869810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.869819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.870160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.870170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.870359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.870369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 
00:29:13.021 [2024-11-20 16:40:58.870722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.870732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.871050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.871060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.871246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.871255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.871417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.871426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.871712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.871722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 
00:29:13.021 [2024-11-20 16:40:58.871765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.871774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.871919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.871928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.872213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.872224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.872489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.872499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.021 [2024-11-20 16:40:58.872833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.872842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 
00:29:13.021 [2024-11-20 16:40:58.873157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.021 [2024-11-20 16:40:58.873167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.021 qpair failed and we were unable to recover it. 00:29:13.022 [2024-11-20 16:40:58.873350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.022 [2024-11-20 16:40:58.873359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.022 qpair failed and we were unable to recover it. 00:29:13.022 [2024-11-20 16:40:58.873561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.022 [2024-11-20 16:40:58.873570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.022 qpair failed and we were unable to recover it. 00:29:13.022 [2024-11-20 16:40:58.873667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.022 [2024-11-20 16:40:58.873677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.022 qpair failed and we were unable to recover it. 00:29:13.022 [2024-11-20 16:40:58.874013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.022 [2024-11-20 16:40:58.874023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.022 qpair failed and we were unable to recover it. 
00:29:13.022 [2024-11-20 16:40:58.874333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.022 [2024-11-20 16:40:58.874342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.022 qpair failed and we were unable to recover it. 
00:29:13.025 [... the same posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock error triplet for tqpair=0x8a1010 (addr=10.0.0.2, port=4420) repeats for every reconnect attempt between 16:40:58.874 and 16:40:58.904; duplicate log lines elided ...] 
00:29:13.025 [2024-11-20 16:40:58.904581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.904591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.904794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.904805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.905112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.905121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.905311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.905320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.905642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.905652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 
00:29:13.025 [2024-11-20 16:40:58.905973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.905987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.906293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.906303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.906456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.906465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.906753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.906763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.907048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.907059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 
00:29:13.025 [2024-11-20 16:40:58.907259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.907269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.907463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.907472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.907802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.907812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.908099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.908109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.908153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.908163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 
00:29:13.025 [2024-11-20 16:40:58.908521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.908531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.908886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.908896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.909188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.909197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.909524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.909535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.909708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.909717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 
00:29:13.025 [2024-11-20 16:40:58.909906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.909916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.910152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.910162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.910486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.910496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.910781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.910792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.911116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.911129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 
00:29:13.025 [2024-11-20 16:40:58.911438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.911448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.911748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.911759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.912065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.025 [2024-11-20 16:40:58.912076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.025 qpair failed and we were unable to recover it. 00:29:13.025 [2024-11-20 16:40:58.912276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.912285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.912458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.912468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 
00:29:13.026 [2024-11-20 16:40:58.912730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.912740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.912950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.912960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.913250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.913260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.913477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.913486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.913819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.913828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 
00:29:13.026 [2024-11-20 16:40:58.914084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.914094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.914267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.914277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.914604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.914613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.914656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.914664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.914829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.914839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 
00:29:13.026 [2024-11-20 16:40:58.914887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.914897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.915193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.915203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.915542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.915551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.915842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.915853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.916009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.916019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 
00:29:13.026 [2024-11-20 16:40:58.916293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.916302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.916625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.916635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.916934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.916945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.917257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.917267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.917416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.917426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 
00:29:13.026 [2024-11-20 16:40:58.917692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.917701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.918031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.918041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.918316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.918326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.918499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.918508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.918810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.918820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 
00:29:13.026 [2024-11-20 16:40:58.919132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.919142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.919464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.919474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.919760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.919770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.920093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.920103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.920292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.920302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 
00:29:13.026 [2024-11-20 16:40:58.920622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.920632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.920941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.920951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.921126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.921136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.921333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.921346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.921630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.921639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 
00:29:13.026 [2024-11-20 16:40:58.921825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.026 [2024-11-20 16:40:58.921834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.026 qpair failed and we were unable to recover it. 00:29:13.026 [2024-11-20 16:40:58.922135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.922145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.922466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.922476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.922663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.922673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.922846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.922856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 
00:29:13.027 [2024-11-20 16:40:58.923143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.923153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.923485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.923496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.923657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.923668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.923884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.923894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.924242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.924252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 
00:29:13.027 [2024-11-20 16:40:58.924562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.924572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.924940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.924950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.925263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.925273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.925558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.925568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 00:29:13.027 [2024-11-20 16:40:58.925880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.027 [2024-11-20 16:40:58.925890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.027 qpair failed and we were unable to recover it. 
00:29:13.307 [2024-11-20 16:40:58.955175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.955185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.955509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.955518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.955805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.955814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.956117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.956127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.956324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.956333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 
00:29:13.307 [2024-11-20 16:40:58.956380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.956389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.956690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.956700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.956986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.956996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.957210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.957220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.957371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.957382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 
00:29:13.307 [2024-11-20 16:40:58.957722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.957732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.958021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.958032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.958435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.958445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.958732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.958742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.958942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.958952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 
00:29:13.307 [2024-11-20 16:40:58.959308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.959318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.959506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.959516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.959682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.959692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.959872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.959882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.960050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.960059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 
00:29:13.307 [2024-11-20 16:40:58.960247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.960256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.960513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.960522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.960789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.960800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.961077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.961087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.961470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.961480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 
00:29:13.307 [2024-11-20 16:40:58.961809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.961819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.962112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.962122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.962289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.962299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.962603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.962613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 00:29:13.307 [2024-11-20 16:40:58.962922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.307 [2024-11-20 16:40:58.962933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.307 qpair failed and we were unable to recover it. 
00:29:13.308 [2024-11-20 16:40:58.963218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.963228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.963542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.963552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.963867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.963877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.963921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.963930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.964080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.964090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 
00:29:13.308 [2024-11-20 16:40:58.964268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.964277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.964561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.964571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.964791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.964802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.965179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.965189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.965499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.965509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 
00:29:13.308 [2024-11-20 16:40:58.965641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.965652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.965843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.965852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.966029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.966039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.966409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.966419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.966731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.966741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 
00:29:13.308 [2024-11-20 16:40:58.967065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.967076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.967360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.967370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.967443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.967452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.967687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.967697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.967879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.967888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 
00:29:13.308 [2024-11-20 16:40:58.968120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.968135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.968484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.968494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.968782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.968792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.968976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.968992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.969297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.969308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 
00:29:13.308 [2024-11-20 16:40:58.969419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.969428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.969695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.969705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.970000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.970010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.970311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.970321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.970672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.970683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 
00:29:13.308 [2024-11-20 16:40:58.971015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.971025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.971319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.971329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.971651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.971662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.971835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.971844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.972036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.972047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 
00:29:13.308 [2024-11-20 16:40:58.972319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.972331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.972643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.972653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.972936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.308 [2024-11-20 16:40:58.972945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.308 qpair failed and we were unable to recover it. 00:29:13.308 [2024-11-20 16:40:58.973260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.309 [2024-11-20 16:40:58.973270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.309 qpair failed and we were unable to recover it. 00:29:13.309 [2024-11-20 16:40:58.973698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.309 [2024-11-20 16:40:58.973708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.309 qpair failed and we were unable to recover it. 
00:29:13.309 [2024-11-20 16:40:58.974024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.309 [2024-11-20 16:40:58.974034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.309 qpair failed and we were unable to recover it. 00:29:13.309 [2024-11-20 16:40:58.974354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.309 [2024-11-20 16:40:58.974364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.309 qpair failed and we were unable to recover it. 00:29:13.309 [2024-11-20 16:40:58.974655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.309 [2024-11-20 16:40:58.974664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.309 qpair failed and we were unable to recover it. 00:29:13.309 [2024-11-20 16:40:58.974898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.309 [2024-11-20 16:40:58.974907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.309 qpair failed and we were unable to recover it. 00:29:13.309 [2024-11-20 16:40:58.975240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.309 [2024-11-20 16:40:58.975250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.309 qpair failed and we were unable to recover it. 
00:29:13.309 [2024-11-20 16:40:58.975547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.309 [2024-11-20 16:40:58.975557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.309 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." entries for tqpair=0x8a1010 (addr=10.0.0.2, port=4420) repeat continuously from 16:40:58.975547 through 16:40:59.007871; duplicate entries elided ...]
00:29:13.312 [2024-11-20 16:40:59.007861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.007871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 
00:29:13.312 [2024-11-20 16:40:59.008191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.008202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.008475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.008485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.008771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.008781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.009120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.009130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.009289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.009300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 
00:29:13.312 [2024-11-20 16:40:59.009634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.009645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.009976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.009990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.010165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.010175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.010502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.010511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.010689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.010698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 
00:29:13.312 [2024-11-20 16:40:59.010883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.010892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.010938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.010948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.011276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.011286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.011515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.011525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.011882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.011892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 
00:29:13.312 [2024-11-20 16:40:59.012194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.012204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.012426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.012436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.012484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.012494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.012648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.012659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.012974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.012987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 
00:29:13.312 [2024-11-20 16:40:59.013103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.013112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.013481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.013492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.013800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.013810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.014039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.014050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 00:29:13.312 [2024-11-20 16:40:59.014252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.014262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.312 qpair failed and we were unable to recover it. 
00:29:13.312 [2024-11-20 16:40:59.014655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.312 [2024-11-20 16:40:59.014666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.014853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.014863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.015169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.015179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.015519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.015528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.015701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.015711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 
00:29:13.313 [2024-11-20 16:40:59.015929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.015939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.016097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.016107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.016276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.016286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.016578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.016587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.016797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.016807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 
00:29:13.313 [2024-11-20 16:40:59.017150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.017160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.017459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.017470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.017809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.017819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.017992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.018003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.018311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.018321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 
00:29:13.313 [2024-11-20 16:40:59.018676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.018687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.018980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.018993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.019295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.019305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.019616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.019627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.019935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.019944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 
00:29:13.313 [2024-11-20 16:40:59.020233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.020245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.020403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.020413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.020727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.020738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.021036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.021047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.021369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.021379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 
00:29:13.313 [2024-11-20 16:40:59.021728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.021737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.022105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.022116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.022390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.022399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.022749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.022759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.023065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.023076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 
00:29:13.313 [2024-11-20 16:40:59.023362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.023372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.023598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.023608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.023946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.023955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.024326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.024336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.024528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.024538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 
00:29:13.313 [2024-11-20 16:40:59.024677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.024686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.025007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.025018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.025307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.025317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.025479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.313 [2024-11-20 16:40:59.025488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.313 qpair failed and we were unable to recover it. 00:29:13.313 [2024-11-20 16:40:59.025701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.025715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 
00:29:13.314 [2024-11-20 16:40:59.025925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.025935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 00:29:13.314 [2024-11-20 16:40:59.026221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.026232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 00:29:13.314 [2024-11-20 16:40:59.026432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.026441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 00:29:13.314 [2024-11-20 16:40:59.026670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.026679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 00:29:13.314 [2024-11-20 16:40:59.027003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.027013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 
00:29:13.314 [2024-11-20 16:40:59.027316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.027326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 00:29:13.314 [2024-11-20 16:40:59.027686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.027695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 00:29:13.314 [2024-11-20 16:40:59.028020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.028030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 00:29:13.314 [2024-11-20 16:40:59.028305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.028315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 00:29:13.314 [2024-11-20 16:40:59.028510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.314 [2024-11-20 16:40:59.028519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.314 qpair failed and we were unable to recover it. 
00:29:13.314 [2024-11-20 16:40:59.028863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.314 [2024-11-20 16:40:59.028872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.314 qpair failed and we were unable to recover it.
00:29:13.317 [... the same three-message sequence (connect() refused with errno 111, tqpair=0x8a1010 connection error to 10.0.0.2:4420, qpair unrecoverable) repeats for every retry through 2024-11-20 16:40:59.058487 ...]
00:29:13.317 [2024-11-20 16:40:59.058535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.058544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.058858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.058869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.059150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.059160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.059505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.059515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.059832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.059842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 
00:29:13.317 [2024-11-20 16:40:59.060149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.060159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.060417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.060427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.060777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.060787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.060952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.060961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.061259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.061269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 
00:29:13.317 [2024-11-20 16:40:59.061479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.061491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.061853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.061863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.062058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.062068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.062448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.062459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.062771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.062781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 
00:29:13.317 [2024-11-20 16:40:59.062953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.062963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.063146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.063158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.063456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.063467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.063645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.063655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.063906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.063916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 
00:29:13.317 [2024-11-20 16:40:59.064271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.064281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.064486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.064496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.317 qpair failed and we were unable to recover it. 00:29:13.317 [2024-11-20 16:40:59.064802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.317 [2024-11-20 16:40:59.064812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.065200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.065211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.065257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.065267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 
00:29:13.318 [2024-11-20 16:40:59.065327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.065336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.065565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.065575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.065889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.065899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.066210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.066221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.066562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.066572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 
00:29:13.318 [2024-11-20 16:40:59.066789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.066799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.066990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.067002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.067165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.067175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.067224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.067234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.067556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.067566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 
00:29:13.318 [2024-11-20 16:40:59.067882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.067892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.068251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.068261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.068560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.068570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.068761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.068772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.068957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.068968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 
00:29:13.318 [2024-11-20 16:40:59.069305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.069316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.069461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.069472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.069777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.069787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.070065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.070075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.070279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.070288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 
00:29:13.318 [2024-11-20 16:40:59.070585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.070595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.070908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.070918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.071237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.071248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.071432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.071441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.071844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.071855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 
00:29:13.318 [2024-11-20 16:40:59.072142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.072152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.072457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.072469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.072809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.072819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.073145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.073155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.073462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.073473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 
00:29:13.318 [2024-11-20 16:40:59.073641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.073651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.073695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.073705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.073902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.073912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.073960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.073970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.074318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.074329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 
00:29:13.318 [2024-11-20 16:40:59.074630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.318 [2024-11-20 16:40:59.074641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.318 qpair failed and we were unable to recover it. 00:29:13.318 [2024-11-20 16:40:59.074962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.074973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.075168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.075178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.075462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.075473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.075820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.075831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 
00:29:13.319 [2024-11-20 16:40:59.075877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.075887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.076048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.076058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.076336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.076347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.076564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.076575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.076774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.076784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 
00:29:13.319 [2024-11-20 16:40:59.077106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.077116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.077437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.077448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.077630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.077641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.077812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.077821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.078031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.078042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 
00:29:13.319 [2024-11-20 16:40:59.078133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.078142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.078565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.078593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.078916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.078925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.079213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.079244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 00:29:13.319 [2024-11-20 16:40:59.079563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.319 [2024-11-20 16:40:59.079572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.319 qpair failed and we were unable to recover it. 
00:29:13.322 [2024-11-20 16:40:59.109069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.109076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.109306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.109313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.109588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.109595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.109780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.109787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.110104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.110112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 
00:29:13.322 [2024-11-20 16:40:59.110291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.110297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.110697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.110704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.110885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.110893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.111203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.111210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.111384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.111391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 
00:29:13.322 [2024-11-20 16:40:59.111554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.111561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.111635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.111642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.111847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.111854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.112084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.112091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 00:29:13.322 [2024-11-20 16:40:59.112283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.322 [2024-11-20 16:40:59.112289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.322 qpair failed and we were unable to recover it. 
00:29:13.322 [2024-11-20 16:40:59.112470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.112477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.112642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.112649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.112951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.112958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.113275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.113283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.113599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.113605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 
00:29:13.323 [2024-11-20 16:40:59.113879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.113886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.114071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.114077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.114367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.114374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.114685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.114692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.114876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.114883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 
00:29:13.323 [2024-11-20 16:40:59.115110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.115117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.115440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.115447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.115644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.115650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.115839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.115847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.116146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.116153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 
00:29:13.323 [2024-11-20 16:40:59.116465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.116472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.116584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.116590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.116866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.116873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.117166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.117174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.117346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.117353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 
00:29:13.323 [2024-11-20 16:40:59.117572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.117581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.117762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.117768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.118082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.118089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.118421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.118427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.118594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.118601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 
00:29:13.323 [2024-11-20 16:40:59.118872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.118878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.119198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.119205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.119405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.119412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.119624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.119631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.323 qpair failed and we were unable to recover it. 00:29:13.323 [2024-11-20 16:40:59.119971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.323 [2024-11-20 16:40:59.119978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 
00:29:13.324 [2024-11-20 16:40:59.120288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.120296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.120494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.120501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.120667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.120675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.120870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.120878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.121246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.121253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 
00:29:13.324 [2024-11-20 16:40:59.121586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.121594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.121905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.121913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.122234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.122242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.122397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.122405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.122723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.122731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 
00:29:13.324 [2024-11-20 16:40:59.122792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.122799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.122948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.122956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.123171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.123179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.123538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.123546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.123849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.123856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 
00:29:13.324 [2024-11-20 16:40:59.124148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.124156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.124305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.124313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.124491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.124499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.124677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.124685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.125111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.125118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 
00:29:13.324 [2024-11-20 16:40:59.125485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.125492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.125714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.125721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.125789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.125796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.126095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.126102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.126432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.126439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 
00:29:13.324 [2024-11-20 16:40:59.126747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.126754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.126915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.126922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.127093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.127100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.127289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.127295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.127506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.127512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 
00:29:13.324 [2024-11-20 16:40:59.127729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.324 [2024-11-20 16:40:59.127737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.324 qpair failed and we were unable to recover it. 00:29:13.324 [2024-11-20 16:40:59.127774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.127781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.128076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.128084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.128402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.128409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.128730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.128736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 
00:29:13.325 [2024-11-20 16:40:59.128904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.128912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.129220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.129227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.129539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.129547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.129848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.129856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.325 [2024-11-20 16:40:59.130210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.130217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 
00:29:13.325 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:13.325 [2024-11-20 16:40:59.130542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.130549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.325 [2024-11-20 16:40:59.130857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.130865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.325 [2024-11-20 16:40:59.131176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.131184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.325 [2024-11-20 16:40:59.131374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.131381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 
00:29:13.325 [2024-11-20 16:40:59.131690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.131697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.132037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.132043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.132368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.132375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.132547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.132555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.132868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.132875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 
00:29:13.325 [2024-11-20 16:40:59.133182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.133189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.133257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.133264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.133555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.133562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.133853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.133860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.134066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.134073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 
00:29:13.325 [2024-11-20 16:40:59.134237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.134244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.134544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.134552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.325 [2024-11-20 16:40:59.134782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.325 [2024-11-20 16:40:59.134789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.325 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.135103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.135110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.135306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.135314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 
00:29:13.326 [2024-11-20 16:40:59.135607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.135615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.135781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.135788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.136114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.136121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.136444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.136451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.136661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.136668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 
00:29:13.326 [2024-11-20 16:40:59.137056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.137063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.137250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.137257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.137476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.137483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.137770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.137776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.138067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.138076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 
00:29:13.326 [2024-11-20 16:40:59.138244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.138251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.138411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.138418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.138708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.138716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.139039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.139047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.139357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.139365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 
00:29:13.326 [2024-11-20 16:40:59.139674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.139682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.139966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.139974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.140171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.140178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.140497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.140504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.140762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.140770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 
00:29:13.326 [2024-11-20 16:40:59.140985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.140993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.141283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.141291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.141582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.141589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.141922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.141930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.142104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.142112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 
00:29:13.326 [2024-11-20 16:40:59.142430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.142437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.142787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.326 [2024-11-20 16:40:59.142794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.326 qpair failed and we were unable to recover it. 00:29:13.326 [2024-11-20 16:40:59.143116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.143123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.143430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.143437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.143728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.143735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 
00:29:13.327 [2024-11-20 16:40:59.144075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.144085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.144402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.144409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.144703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.144710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.144780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.144786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.145073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.145080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 
00:29:13.327 [2024-11-20 16:40:59.145476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.145484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.145793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.145800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.146031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.146039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.146200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.146207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.146592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.146600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 
00:29:13.327 [2024-11-20 16:40:59.146913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.146919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.147109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.147117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.147339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.147346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.147714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.147721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.147907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.147913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 
00:29:13.327 [2024-11-20 16:40:59.148096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.148103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.148254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.148261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.148453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.148460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.148626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.148633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.148927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.148937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 
00:29:13.327 [2024-11-20 16:40:59.149272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.149279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.149560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.149567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.149892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.149900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.150059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.150066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.150253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.150260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 
00:29:13.327 [2024-11-20 16:40:59.150520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.150528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.327 [2024-11-20 16:40:59.150914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.327 [2024-11-20 16:40:59.150921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.327 qpair failed and we were unable to recover it. 00:29:13.328 [2024-11-20 16:40:59.151238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.151245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. 00:29:13.328 [2024-11-20 16:40:59.151460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.151467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. 00:29:13.328 [2024-11-20 16:40:59.151671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.151678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. 
00:29:13.328 [2024-11-20 16:40:59.151965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.151973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. 00:29:13.328 [2024-11-20 16:40:59.152144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.152152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. 00:29:13.328 [2024-11-20 16:40:59.152459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.152468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. 00:29:13.328 [2024-11-20 16:40:59.152820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.152827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. 00:29:13.328 [2024-11-20 16:40:59.152998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.153006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. 
00:29:13.328 [2024-11-20 16:40:59.153302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.328 [2024-11-20 16:40:59.153310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.328 qpair failed and we were unable to recover it. [... the same connect()/qpair-failure pair repeats with advancing timestamps, 16:40:59.153375 through 16:40:59.169434; repeats elided ...]
00:29:13.330 [... repeated connect()/qpair-failure messages elided ...] 00:29:13.330 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.330 [... repeated connect()/qpair-failure messages elided ...]
00:29:13.330 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 [... repeated connect()/qpair-failure messages elided ...] 00:29:13.330 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [...] 00:29:13.330 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [... repeated connect()/qpair-failure messages elided ...]
00:29:13.330 [2024-11-20 16:40:59.172071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.330 [2024-11-20 16:40:59.172078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.172252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.172259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.172567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.172574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.172888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.172896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.173216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.173223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-11-20 16:40:59.173573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.173580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.173761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.173768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.173947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.173954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.174176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.174184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.174476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.174483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-11-20 16:40:59.174821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.174828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.175047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.175053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.175361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.175368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.175404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.175411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.175698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.175705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-11-20 16:40:59.176014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.176022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.176192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.176204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.176506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.176513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.176577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.176583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.176884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.176891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-11-20 16:40:59.177210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.177216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.177502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.177515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.177827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.177834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.178001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.178009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.178188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.178195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-11-20 16:40:59.178484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.178490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.178795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.178801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.179105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.179113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.179363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.179370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.179544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.179553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 
00:29:13.331 [2024-11-20 16:40:59.179727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.179734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.180043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.180051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.331 qpair failed and we were unable to recover it. 00:29:13.331 [2024-11-20 16:40:59.180220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.331 [2024-11-20 16:40:59.180228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.180304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.180310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.180577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.180584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-11-20 16:40:59.180901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.180907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.181243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.181250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.181552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.181560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.181879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.181886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.182186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.182193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-11-20 16:40:59.182504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.182511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.182663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.182670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.182946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.182953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.183236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.183242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.183552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.183559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-11-20 16:40:59.183722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.183730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.184049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.184056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.184362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.184369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.184681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.184688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.184851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.184858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-11-20 16:40:59.185282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.185288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.185440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.185447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.185747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.185754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.186061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.186068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.186401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.186408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-11-20 16:40:59.186734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.186742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.187091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.187098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.187346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.187353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.187709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.187716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.187765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.187772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 
00:29:13.332 [2024-11-20 16:40:59.188039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.188046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.188227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.188233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.332 [2024-11-20 16:40:59.188546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.332 [2024-11-20 16:40:59.188553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.332 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.188870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.188877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.189214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.189221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 
00:29:13.333 [2024-11-20 16:40:59.189615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.189622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.189972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.189979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.190202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.190209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.190389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.190395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.190703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.190712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 
00:29:13.333 [2024-11-20 16:40:59.191024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.191032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.191248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.191254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.191570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.191577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.191731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.191738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.191927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.191933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 
00:29:13.333 [2024-11-20 16:40:59.192186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.192193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.192502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.192509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.192718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.192725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.192951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.192958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.193213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.193220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 
00:29:13.333 [2024-11-20 16:40:59.193552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.193559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.193869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.193875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.194036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.194043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.194409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.194416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.194588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.194595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 
00:29:13.333 [2024-11-20 16:40:59.194852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.194859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.195154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.195161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.195334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.333 [2024-11-20 16:40:59.195342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.333 qpair failed and we were unable to recover it. 00:29:13.333 [2024-11-20 16:40:59.195718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.195724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.196032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.196039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 
00:29:13.334 [2024-11-20 16:40:59.196381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.196388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.196546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.196553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.196847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.196861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.197159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.197166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.197472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.197480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 
00:29:13.334 [2024-11-20 16:40:59.197793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.197800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.198117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.198125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.198440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.198447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.198751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.198758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.199081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.199088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 
00:29:13.334 [2024-11-20 16:40:59.199280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.199288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.199636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.199643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.199959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.199966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.200149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.200157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.200326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.200333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 
00:29:13.334 [2024-11-20 16:40:59.200515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.200521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.200813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.200820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.201148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.201156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.201466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.201473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.201786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.201794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 
00:29:13.334 [2024-11-20 16:40:59.202090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.202097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.202276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.202283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.202454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.202461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.202849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.202857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.203149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.203155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 
00:29:13.334 [2024-11-20 16:40:59.203192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.203198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.334 [2024-11-20 16:40:59.203358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.334 [2024-11-20 16:40:59.203365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.334 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.203625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.203632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.203807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.203815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.204131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.204139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 
00:29:13.335 [2024-11-20 16:40:59.204450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.204463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.204629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.204636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.204674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.204680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.205014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.205021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.205188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.205195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 
00:29:13.335 [2024-11-20 16:40:59.205382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.205390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.205705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.205712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.205997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.206004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 Malloc0 00:29:13.335 [2024-11-20 16:40:59.206334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.206342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.206651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.206658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 
00:29:13.335 [2024-11-20 16:40:59.206971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.206978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.335 [2024-11-20 16:40:59.207163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.207170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.207248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.207255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.207444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.207451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 
00:29:13.335 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:13.335 [2024-11-20 16:40:59.207786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.335 [2024-11-20 16:40:59.207795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.335 [2024-11-20 16:40:59.208117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.208125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.208293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.208301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.208463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.208470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 
00:29:13.335 [2024-11-20 16:40:59.208756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.208762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.209069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.209077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.335 [2024-11-20 16:40:59.209265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.335 [2024-11-20 16:40:59.209272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.335 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.209490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.209497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.209814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.209820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 
00:29:13.336 [2024-11-20 16:40:59.210154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.210162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.210195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.210202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.210377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.210384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.210562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.210569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.210896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.210903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 
00:29:13.336 [2024-11-20 16:40:59.211092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.211099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.211378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.211385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.211456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.211462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.211642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.211649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.211974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.211984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 
00:29:13.336 [2024-11-20 16:40:59.212355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.212362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.212519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.212525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.212592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.212598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.212918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.212925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.213205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.213212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 
00:29:13.336 [2024-11-20 16:40:59.213576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.213583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.213791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.336 [2024-11-20 16:40:59.213900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.213907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.214278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.214285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.214597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.214604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.336 [2024-11-20 16:40:59.214910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.214916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 
00:29:13.336 [2024-11-20 16:40:59.215232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.336 [2024-11-20 16:40:59.215238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.336 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.215405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.215412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.215690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.215697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.215880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.215887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.216110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.216117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 
00:29:13.337 [2024-11-20 16:40:59.216338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.216345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.216550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.216557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.216750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.216757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.216922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.216929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.217213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.217221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 
00:29:13.337 [2024-11-20 16:40:59.217388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.217396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.217692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.217699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.217992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.217999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.218182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.218189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.218520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.218526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 
00:29:13.337 [2024-11-20 16:40:59.218731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.218737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.219011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.219019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.219199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.219207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.219510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.219517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.219828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.219835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 
00:29:13.337 [2024-11-20 16:40:59.220130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.220137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.220436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.220444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.220566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.220574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.220865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.220871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.221217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.221227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 
00:29:13.337 [2024-11-20 16:40:59.221569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.221582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.221889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.221896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.222099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.222106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.222449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.222456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.337 [2024-11-20 16:40:59.222756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.222764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 
00:29:13.337 [2024-11-20 16:40:59.222917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.222924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 [2024-11-20 16:40:59.223098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.223106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.337 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.337 [2024-11-20 16:40:59.223449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.337 [2024-11-20 16:40:59.223456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.337 qpair failed and we were unable to recover it. 00:29:13.338 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.338 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.338 [2024-11-20 16:40:59.223781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.223788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 
00:29:13.338 [2024-11-20 16:40:59.224102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.224109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 00:29:13.338 [2024-11-20 16:40:59.224429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.224436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 00:29:13.338 [2024-11-20 16:40:59.224756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.224764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 00:29:13.338 [2024-11-20 16:40:59.224933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.224941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 00:29:13.338 [2024-11-20 16:40:59.225109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.225117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 
00:29:13.338 [2024-11-20 16:40:59.225410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.225416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 00:29:13.338 [2024-11-20 16:40:59.225745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.225753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 00:29:13.338 [2024-11-20 16:40:59.225925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.225932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 00:29:13.338 [2024-11-20 16:40:59.226257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.226264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 00:29:13.338 [2024-11-20 16:40:59.226562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:13.338 [2024-11-20 16:40:59.226569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420 00:29:13.338 qpair failed and we were unable to recover it. 
00:29:13.338 [2024-11-20 16:40:59.226931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.226938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.227222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.227230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.227401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.227409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.227608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.227615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.227923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.227931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.228218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.228226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.228509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.228517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.228722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.228730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.228917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.228924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.229099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.229107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.229413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.229421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.229609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.229616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.229781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.229789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.229971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.229978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.230319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.230327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.230652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.230659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.230827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.230835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.231133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.231140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.231326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.231335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.231658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.231664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.231968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.231975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.232289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.232296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.232582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.232595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.232775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.232781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.232943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.232949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.233129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.338 [2024-11-20 16:40:59.233137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.338 qpair failed and we were unable to recover it.
00:29:13.338 [2024-11-20 16:40:59.233427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.233435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.233751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.233758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.234042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.234049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.234235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.234243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.234444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.234451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.234647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.234653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.234856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.234864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.235174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.235181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:13.339 [2024-11-20 16:40:59.235500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.235507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.339 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:13.339 [2024-11-20 16:40:59.235819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.235827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.236127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.236135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.236459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.236466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.236577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.236583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.236784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.236790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.237111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.237117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.237303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.237309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.237424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.237431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.237584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.237592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.237886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.237893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.238208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.238216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.238547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.238554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.238868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.238875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.239200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.239207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.239410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.239416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.239757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.239765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.240097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.240104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.240282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.240289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.240330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.240336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.240533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.240540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.240707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.240714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.240924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.240931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.241254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.241261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.241430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.241437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.241634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.241641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.339 [2024-11-20 16:40:59.241971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.339 [2024-11-20 16:40:59.241978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.339 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.242361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.242368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.242671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.242678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.243042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.243049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.243350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.243356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.243542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.243549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.243820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.243827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.243998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.244005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.244325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.244332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.244498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.244505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.244801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.244808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.245122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.245129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.245424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.245430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.340 [2024-11-20 16:40:59.245726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.340 [2024-11-20 16:40:59.245733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.340 qpair failed and we were unable to recover it.
00:29:13.606 [2024-11-20 16:40:59.246026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.246035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 [2024-11-20 16:40:59.246346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.246354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 [2024-11-20 16:40:59.246646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.246653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 [2024-11-20 16:40:59.246845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.246852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.606 [2024-11-20 16:40:59.247155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.247162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:13.606 [2024-11-20 16:40:59.247475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.247482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.606 [2024-11-20 16:40:59.247796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.247803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:13.606 [2024-11-20 16:40:59.248186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.248195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 [2024-11-20 16:40:59.248377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.248385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe984000b90 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 [2024-11-20 16:40:59.248670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.248707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 [2024-11-20 16:40:59.249063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.606 [2024-11-20 16:40:59.249078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.606 qpair failed and we were unable to recover it.
00:29:13.606 [2024-11-20 16:40:59.249385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.249423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.249745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.249758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.250060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.250072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.250417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.250427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.250471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.250479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.250791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.250800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.251004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.251015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.251445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.251455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.251744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.251753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.251931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.251940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.252260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.252271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.252572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.252583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.252763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.252773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.253009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.253020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.253309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.253319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.253500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.253511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.253792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.607 [2024-11-20 16:40:59.253803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8a1010 with addr=10.0.0.2, port=4420
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.254017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:13.607 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.607 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:13.607 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:13.607 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:13.607 [2024-11-20 16:40:59.264729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.607 [2024-11-20 16:40:59.264799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.607 [2024-11-20 16:40:59.264817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.607 [2024-11-20 16:40:59.264825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.607 [2024-11-20 16:40:59.264832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:13.607 [2024-11-20 16:40:59.264851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:13.607 16:40:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2395486
00:29:13.607 [2024-11-20 16:40:59.274622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:13.607 [2024-11-20 16:40:59.274692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:13.607 [2024-11-20 16:40:59.274706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:13.607 [2024-11-20 16:40:59.274713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:13.607 [2024-11-20 16:40:59.274720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:13.607 [2024-11-20 16:40:59.274734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:13.607 qpair failed and we were unable to recover it.
00:29:13.607 [2024-11-20 16:40:59.284635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.607 [2024-11-20 16:40:59.284693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.607 [2024-11-20 16:40:59.284708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.607 [2024-11-20 16:40:59.284715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.607 [2024-11-20 16:40:59.284722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.607 [2024-11-20 16:40:59.284736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.607 qpair failed and we were unable to recover it. 
00:29:13.607 [2024-11-20 16:40:59.294657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.607 [2024-11-20 16:40:59.294722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.607 [2024-11-20 16:40:59.294735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.607 [2024-11-20 16:40:59.294742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.607 [2024-11-20 16:40:59.294749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.607 [2024-11-20 16:40:59.294763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.607 qpair failed and we were unable to recover it. 
00:29:13.607 [2024-11-20 16:40:59.304673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.607 [2024-11-20 16:40:59.304737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.607 [2024-11-20 16:40:59.304750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.607 [2024-11-20 16:40:59.304758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.607 [2024-11-20 16:40:59.304764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.607 [2024-11-20 16:40:59.304777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.607 qpair failed and we were unable to recover it. 
00:29:13.607 [2024-11-20 16:40:59.314491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.607 [2024-11-20 16:40:59.314545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.607 [2024-11-20 16:40:59.314559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.607 [2024-11-20 16:40:59.314570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.607 [2024-11-20 16:40:59.314577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.607 [2024-11-20 16:40:59.314590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.607 qpair failed and we were unable to recover it. 
00:29:13.607 [2024-11-20 16:40:59.324631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.607 [2024-11-20 16:40:59.324718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.607 [2024-11-20 16:40:59.324731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.607 [2024-11-20 16:40:59.324739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.324745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.324759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.334680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.334738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.334752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.334759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.334765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.334779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.344737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.344798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.344816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.344823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.344829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.344844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.354726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.354780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.354793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.354801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.354807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.354824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.364763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.364812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.364825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.364832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.364839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.364853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.374784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.374844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.374857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.374865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.374871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.374885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.384837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.384894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.384908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.384916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.384922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.384936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.394837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.394892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.394906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.394913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.394920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.394933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.404879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.404974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.404992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.405000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.405006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.405020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.414889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.414969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.414986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.414994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.415000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.415014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.424818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.424883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.424896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.424903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.424910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.424923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.434957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.435018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.435032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.435040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.435046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.435060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.444995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.445048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.445061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.445072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.445078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.608 [2024-11-20 16:40:59.445092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.608 qpair failed and we were unable to recover it. 
00:29:13.608 [2024-11-20 16:40:59.454932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.608 [2024-11-20 16:40:59.454987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.608 [2024-11-20 16:40:59.455001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.608 [2024-11-20 16:40:59.455008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.608 [2024-11-20 16:40:59.455014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.455028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.465065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.465121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.465137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.465145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.465152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.465168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.475118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.475189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.475204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.475211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.475217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.475231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.485141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.485201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.485216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.485223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.485229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.485247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.495170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.495229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.495242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.495249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.495256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.495269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.505257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.505316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.505330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.505337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.505343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.505357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.515232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.515324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.515337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.515344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.515350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.515364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.525265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.525348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.525362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.525369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.525375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.525389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.535262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.535330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.535343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.535350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.535356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.535370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.545314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.545370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.545383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.545390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.545397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.545410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.609 [2024-11-20 16:40:59.555331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.609 [2024-11-20 16:40:59.555394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.609 [2024-11-20 16:40:59.555408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.609 [2024-11-20 16:40:59.555416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.609 [2024-11-20 16:40:59.555422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.609 [2024-11-20 16:40:59.555435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.609 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.565338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.565389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.565402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.565409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.565416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.565429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.575428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.575494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.575507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.575519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.575525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.575538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.585415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.585476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.585490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.585498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.585504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.585517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.595434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.595484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.595498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.595505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.595511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.595525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.605322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.605372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.605386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.605393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.605400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.605414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.615534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.615587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.615602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.615609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.615616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.615634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.625531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.625620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.625634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.625642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.625648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.625663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.635536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.635593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.635606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.635613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.635620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.635633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.645555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.645605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.645619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.645626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.645632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.645646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.655594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.655646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.655659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.655667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.655673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.871 [2024-11-20 16:40:59.655686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.871 qpair failed and we were unable to recover it. 
00:29:13.871 [2024-11-20 16:40:59.665631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.871 [2024-11-20 16:40:59.665736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.871 [2024-11-20 16:40:59.665749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.871 [2024-11-20 16:40:59.665756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.871 [2024-11-20 16:40:59.665763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.665776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.675664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.675752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.675777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.675785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.675792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.675812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.685700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.685750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.685767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.685774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.685781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.685796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.695604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.695662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.695676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.695684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.695691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.695705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.705634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.705695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.705709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.705721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.705727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.705741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.715760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.715819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.715832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.715839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.715846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.715859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.725709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.725766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.725779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.725786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.725793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.725806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.735825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.735881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.735895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.735902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.735908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.735921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.745918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.746017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.746031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.746038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.746044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.746061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.755883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.755935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.755949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.755956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.755963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.755976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.765890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.765947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.765961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.765968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.765974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.765991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.775938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.775993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.776007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.776014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.776020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.776034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.785987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.786040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.786054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.786061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.786068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.872 [2024-11-20 16:40:59.786082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.872 qpair failed and we were unable to recover it. 
00:29:13.872 [2024-11-20 16:40:59.796003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.872 [2024-11-20 16:40:59.796061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.872 [2024-11-20 16:40:59.796074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.872 [2024-11-20 16:40:59.796081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.872 [2024-11-20 16:40:59.796088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.873 [2024-11-20 16:40:59.796101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.873 qpair failed and we were unable to recover it. 
00:29:13.873 [2024-11-20 16:40:59.806027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.873 [2024-11-20 16:40:59.806079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.873 [2024-11-20 16:40:59.806092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.873 [2024-11-20 16:40:59.806099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.873 [2024-11-20 16:40:59.806106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.873 [2024-11-20 16:40:59.806119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.873 qpair failed and we were unable to recover it. 
00:29:13.873 [2024-11-20 16:40:59.815958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.873 [2024-11-20 16:40:59.816021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.873 [2024-11-20 16:40:59.816034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.873 [2024-11-20 16:40:59.816042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.873 [2024-11-20 16:40:59.816048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:13.873 [2024-11-20 16:40:59.816062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.873 qpair failed and we were unable to recover it. 
00:29:14.134 [2024-11-20 16:40:59.826079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.134 [2024-11-20 16:40:59.826154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.134 [2024-11-20 16:40:59.826167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.134 [2024-11-20 16:40:59.826174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.134 [2024-11-20 16:40:59.826181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.134 [2024-11-20 16:40:59.826194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.134 qpair failed and we were unable to recover it. 
00:29:14.134 [2024-11-20 16:40:59.836091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.134 [2024-11-20 16:40:59.836143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.134 [2024-11-20 16:40:59.836157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.134 [2024-11-20 16:40:59.836167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.134 [2024-11-20 16:40:59.836174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.134 [2024-11-20 16:40:59.836187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.134 qpair failed and we were unable to recover it. 
00:29:14.134 [2024-11-20 16:40:59.846143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.134 [2024-11-20 16:40:59.846197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.134 [2024-11-20 16:40:59.846210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.134 [2024-11-20 16:40:59.846217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.134 [2024-11-20 16:40:59.846223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.134 [2024-11-20 16:40:59.846237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.134 qpair failed and we were unable to recover it. 
00:29:14.134 [2024-11-20 16:40:59.856100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.134 [2024-11-20 16:40:59.856152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.134 [2024-11-20 16:40:59.856165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.134 [2024-11-20 16:40:59.856172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.134 [2024-11-20 16:40:59.856178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.134 [2024-11-20 16:40:59.856192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.134 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.866085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.866148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.866161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.866168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.866174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.866188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.876221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.876276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.876290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.876297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.876304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.876321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.886243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.886292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.886306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.886313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.886320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.886333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.896292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.896359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.896372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.896379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.896385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.896399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.906216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.906277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.906292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.906300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.906306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.906320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.916276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.916327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.916340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.916347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.916354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.916367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.926351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.926407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.926421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.926428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.926435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.926448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.936392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.936445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.936459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.936466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.936472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.936485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.946422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.946474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.946487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.946494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.946500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.946513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.956440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.956492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.956506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.956514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.135 [2024-11-20 16:40:59.956520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.135 [2024-11-20 16:40:59.956536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.135 qpair failed and we were unable to recover it.
00:29:14.135 [2024-11-20 16:40:59.966463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.135 [2024-11-20 16:40:59.966512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.135 [2024-11-20 16:40:59.966527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.135 [2024-11-20 16:40:59.966537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:40:59.966544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:40:59.966557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:40:59.976497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:40:59.976552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:40:59.976565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:40:59.976572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:40:59.976578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:40:59.976592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:40:59.986447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:40:59.986499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:40:59.986513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:40:59.986520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:40:59.986527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:40:59.986540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:40:59.996456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:40:59.996561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:40:59.996574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:40:59.996581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:40:59.996588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:40:59.996601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.006601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.006654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.006669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.006676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.006682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:41:00.006700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.016644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.016703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.016716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.016724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.016731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:41:00.016744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.026597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.026646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.026660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.026667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.026673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:41:00.026687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.036649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.036700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.036714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.036721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.036727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:41:00.036741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.046703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.046759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.046773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.046780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.046787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:41:00.046800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.056666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.056751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.056764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.056771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.056778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:41:00.056791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.066811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.066867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.066880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.066887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.066894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:41:00.066907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.076774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.076832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.076845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.076852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.076858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.136 [2024-11-20 16:41:00.076872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.136 qpair failed and we were unable to recover it.
00:29:14.136 [2024-11-20 16:41:00.086806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.136 [2024-11-20 16:41:00.086860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.136 [2024-11-20 16:41:00.086874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.136 [2024-11-20 16:41:00.086881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.136 [2024-11-20 16:41:00.086887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.137 [2024-11-20 16:41:00.086901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.137 qpair failed and we were unable to recover it.
00:29:14.398 [2024-11-20 16:41:00.096770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-20 16:41:00.096836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.398 [2024-11-20 16:41:00.096850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.398 [2024-11-20 16:41:00.096861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.398 [2024-11-20 16:41:00.096867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.398 [2024-11-20 16:41:00.096881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.398 qpair failed and we were unable to recover it.
00:29:14.398 [2024-11-20 16:41:00.106837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-20 16:41:00.106895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.398 [2024-11-20 16:41:00.106909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.398 [2024-11-20 16:41:00.106916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.398 [2024-11-20 16:41:00.106922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.398 [2024-11-20 16:41:00.106937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.398 qpair failed and we were unable to recover it.
00:29:14.398 [2024-11-20 16:41:00.116914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-20 16:41:00.116971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.398 [2024-11-20 16:41:00.116988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.398 [2024-11-20 16:41:00.116996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.398 [2024-11-20 16:41:00.117002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.398 [2024-11-20 16:41:00.117016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.398 qpair failed and we were unable to recover it.
00:29:14.398 [2024-11-20 16:41:00.126927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-20 16:41:00.126988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.398 [2024-11-20 16:41:00.127002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.398 [2024-11-20 16:41:00.127009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.398 [2024-11-20 16:41:00.127016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.398 [2024-11-20 16:41:00.127030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.398 qpair failed and we were unable to recover it.
00:29:14.398 [2024-11-20 16:41:00.136918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.398 [2024-11-20 16:41:00.136985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.399 [2024-11-20 16:41:00.136998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.399 [2024-11-20 16:41:00.137005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.399 [2024-11-20 16:41:00.137012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.399 [2024-11-20 16:41:00.137029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.399 qpair failed and we were unable to recover it.
00:29:14.399 [2024-11-20 16:41:00.146990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.399 [2024-11-20 16:41:00.147042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.399 [2024-11-20 16:41:00.147056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.399 [2024-11-20 16:41:00.147064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.399 [2024-11-20 16:41:00.147070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.399 [2024-11-20 16:41:00.147084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.399 qpair failed and we were unable to recover it.
00:29:14.399 [2024-11-20 16:41:00.156934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.399 [2024-11-20 16:41:00.156989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.399 [2024-11-20 16:41:00.157003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.399 [2024-11-20 16:41:00.157010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.399 [2024-11-20 16:41:00.157016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.399 [2024-11-20 16:41:00.157030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.399 qpair failed and we were unable to recover it.
00:29:14.399 [2024-11-20 16:41:00.167027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.399 [2024-11-20 16:41:00.167074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.399 [2024-11-20 16:41:00.167088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.399 [2024-11-20 16:41:00.167095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.399 [2024-11-20 16:41:00.167101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.399 [2024-11-20 16:41:00.167115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.399 qpair failed and we were unable to recover it.
00:29:14.399 [2024-11-20 16:41:00.177070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.399 [2024-11-20 16:41:00.177128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.399 [2024-11-20 16:41:00.177141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.399 [2024-11-20 16:41:00.177149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.399 [2024-11-20 16:41:00.177155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.399 [2024-11-20 16:41:00.177169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.399 qpair failed and we were unable to recover it.
00:29:14.399 [2024-11-20 16:41:00.187139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.399 [2024-11-20 16:41:00.187242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.399 [2024-11-20 16:41:00.187256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.399 [2024-11-20 16:41:00.187263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.399 [2024-11-20 16:41:00.187270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.399 [2024-11-20 16:41:00.187284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.399 qpair failed and we were unable to recover it.
00:29:14.399 [2024-11-20 16:41:00.197101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.399 [2024-11-20 16:41:00.197154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.399 [2024-11-20 16:41:00.197168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.399 [2024-11-20 16:41:00.197175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.399 [2024-11-20 16:41:00.197181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.399 [2024-11-20 16:41:00.197195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.399 qpair failed and we were unable to recover it.
00:29:14.399 [2024-11-20 16:41:00.207135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.399 [2024-11-20 16:41:00.207185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.399 [2024-11-20 16:41:00.207198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.399 [2024-11-20 16:41:00.207205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.399 [2024-11-20 16:41:00.207212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.399 [2024-11-20 16:41:00.207225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.399 qpair failed and we were unable to recover it. 
00:29:14.399 [2024-11-20 16:41:00.217205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.399 [2024-11-20 16:41:00.217286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.399 [2024-11-20 16:41:00.217299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.399 [2024-11-20 16:41:00.217307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.399 [2024-11-20 16:41:00.217313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.399 [2024-11-20 16:41:00.217326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.399 qpair failed and we were unable to recover it. 
00:29:14.399 [2024-11-20 16:41:00.227232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.399 [2024-11-20 16:41:00.227284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.399 [2024-11-20 16:41:00.227297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.399 [2024-11-20 16:41:00.227311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.399 [2024-11-20 16:41:00.227318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.399 [2024-11-20 16:41:00.227331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.399 qpair failed and we were unable to recover it. 
00:29:14.399 [2024-11-20 16:41:00.237211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.399 [2024-11-20 16:41:00.237266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.399 [2024-11-20 16:41:00.237279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.399 [2024-11-20 16:41:00.237286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.399 [2024-11-20 16:41:00.237293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.399 [2024-11-20 16:41:00.237306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.399 qpair failed and we were unable to recover it. 
00:29:14.399 [2024-11-20 16:41:00.247303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.399 [2024-11-20 16:41:00.247358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.399 [2024-11-20 16:41:00.247371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.399 [2024-11-20 16:41:00.247378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.399 [2024-11-20 16:41:00.247385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.399 [2024-11-20 16:41:00.247398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.399 qpair failed and we were unable to recover it. 
00:29:14.399 [2024-11-20 16:41:00.257293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.399 [2024-11-20 16:41:00.257345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.399 [2024-11-20 16:41:00.257358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.399 [2024-11-20 16:41:00.257365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.399 [2024-11-20 16:41:00.257372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.399 [2024-11-20 16:41:00.257385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.399 qpair failed and we were unable to recover it. 
00:29:14.399 [2024-11-20 16:41:00.267342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.399 [2024-11-20 16:41:00.267394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.399 [2024-11-20 16:41:00.267408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.399 [2024-11-20 16:41:00.267415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.267421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.267437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.400 [2024-11-20 16:41:00.277342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.400 [2024-11-20 16:41:00.277392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.400 [2024-11-20 16:41:00.277405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.400 [2024-11-20 16:41:00.277412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.277418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.277431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.400 [2024-11-20 16:41:00.287363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.400 [2024-11-20 16:41:00.287410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.400 [2024-11-20 16:41:00.287424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.400 [2024-11-20 16:41:00.287431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.287438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.287451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.400 [2024-11-20 16:41:00.297391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.400 [2024-11-20 16:41:00.297442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.400 [2024-11-20 16:41:00.297455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.400 [2024-11-20 16:41:00.297462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.297469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.297482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.400 [2024-11-20 16:41:00.307453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.400 [2024-11-20 16:41:00.307510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.400 [2024-11-20 16:41:00.307523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.400 [2024-11-20 16:41:00.307530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.307537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.307550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.400 [2024-11-20 16:41:00.317424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.400 [2024-11-20 16:41:00.317484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.400 [2024-11-20 16:41:00.317499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.400 [2024-11-20 16:41:00.317506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.317512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.317525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.400 [2024-11-20 16:41:00.327481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.400 [2024-11-20 16:41:00.327536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.400 [2024-11-20 16:41:00.327549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.400 [2024-11-20 16:41:00.327556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.327563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.327576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.400 [2024-11-20 16:41:00.337391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.400 [2024-11-20 16:41:00.337445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.400 [2024-11-20 16:41:00.337458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.400 [2024-11-20 16:41:00.337465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.337471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.337484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.400 [2024-11-20 16:41:00.347562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.400 [2024-11-20 16:41:00.347654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.400 [2024-11-20 16:41:00.347667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.400 [2024-11-20 16:41:00.347674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.400 [2024-11-20 16:41:00.347680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.400 [2024-11-20 16:41:00.347693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.400 qpair failed and we were unable to recover it. 
00:29:14.661 [2024-11-20 16:41:00.357572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.661 [2024-11-20 16:41:00.357621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.661 [2024-11-20 16:41:00.357634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.661 [2024-11-20 16:41:00.357645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.661 [2024-11-20 16:41:00.357651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.661 [2024-11-20 16:41:00.357665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.661 qpair failed and we were unable to recover it. 
00:29:14.661 [2024-11-20 16:41:00.367606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.661 [2024-11-20 16:41:00.367659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.661 [2024-11-20 16:41:00.367672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.661 [2024-11-20 16:41:00.367679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.661 [2024-11-20 16:41:00.367686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.661 [2024-11-20 16:41:00.367699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.661 qpair failed and we were unable to recover it. 
00:29:14.661 [2024-11-20 16:41:00.377556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.661 [2024-11-20 16:41:00.377611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.661 [2024-11-20 16:41:00.377624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.661 [2024-11-20 16:41:00.377631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.661 [2024-11-20 16:41:00.377637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.661 [2024-11-20 16:41:00.377650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.661 qpair failed and we were unable to recover it. 
00:29:14.661 [2024-11-20 16:41:00.387675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.661 [2024-11-20 16:41:00.387733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.661 [2024-11-20 16:41:00.387747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.661 [2024-11-20 16:41:00.387754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.661 [2024-11-20 16:41:00.387760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.661 [2024-11-20 16:41:00.387773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.661 qpair failed and we were unable to recover it. 
00:29:14.661 [2024-11-20 16:41:00.397687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.661 [2024-11-20 16:41:00.397746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.661 [2024-11-20 16:41:00.397771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.661 [2024-11-20 16:41:00.397779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.661 [2024-11-20 16:41:00.397786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.661 [2024-11-20 16:41:00.397809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.661 qpair failed and we were unable to recover it. 
00:29:14.661 [2024-11-20 16:41:00.407721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.661 [2024-11-20 16:41:00.407775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.662 [2024-11-20 16:41:00.407800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.662 [2024-11-20 16:41:00.407808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.662 [2024-11-20 16:41:00.407816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.662 [2024-11-20 16:41:00.407834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.662 qpair failed and we were unable to recover it. 
00:29:14.662 [2024-11-20 16:41:00.417786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.662 [2024-11-20 16:41:00.417842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.662 [2024-11-20 16:41:00.417857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.662 [2024-11-20 16:41:00.417864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.662 [2024-11-20 16:41:00.417871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.662 [2024-11-20 16:41:00.417885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.662 qpair failed and we were unable to recover it. 
00:29:14.662 [2024-11-20 16:41:00.427760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.662 [2024-11-20 16:41:00.427817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.662 [2024-11-20 16:41:00.427831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.662 [2024-11-20 16:41:00.427838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.662 [2024-11-20 16:41:00.427845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.662 [2024-11-20 16:41:00.427859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.662 qpair failed and we were unable to recover it. 
00:29:14.662 [2024-11-20 16:41:00.437784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.662 [2024-11-20 16:41:00.437833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.662 [2024-11-20 16:41:00.437847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.662 [2024-11-20 16:41:00.437854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.662 [2024-11-20 16:41:00.437860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.662 [2024-11-20 16:41:00.437874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.662 qpair failed and we were unable to recover it. 
00:29:14.662 [2024-11-20 16:41:00.447826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.662 [2024-11-20 16:41:00.447877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.662 [2024-11-20 16:41:00.447891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.662 [2024-11-20 16:41:00.447898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.662 [2024-11-20 16:41:00.447904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.662 [2024-11-20 16:41:00.447917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.662 qpair failed and we were unable to recover it. 
00:29:14.662 [2024-11-20 16:41:00.457828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.662 [2024-11-20 16:41:00.457888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.662 [2024-11-20 16:41:00.457901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.662 [2024-11-20 16:41:00.457908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.662 [2024-11-20 16:41:00.457915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.662 [2024-11-20 16:41:00.457928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.662 qpair failed and we were unable to recover it. 
00:29:14.662 [2024-11-20 16:41:00.467840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.662 [2024-11-20 16:41:00.467899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.662 [2024-11-20 16:41:00.467912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.662 [2024-11-20 16:41:00.467919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.662 [2024-11-20 16:41:00.467926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.662 [2024-11-20 16:41:00.467939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.662 qpair failed and we were unable to recover it. 
00:29:14.662 [2024-11-20 16:41:00.477771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-20 16:41:00.477828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-20 16:41:00.477841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-20 16:41:00.477848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-20 16:41:00.477855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.662 [2024-11-20 16:41:00.477868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-20 16:41:00.487919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-20 16:41:00.487976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-20 16:41:00.487994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-20 16:41:00.488005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-20 16:41:00.488012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.662 [2024-11-20 16:41:00.488026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-20 16:41:00.497961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-20 16:41:00.498019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-20 16:41:00.498032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-20 16:41:00.498039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-20 16:41:00.498046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.662 [2024-11-20 16:41:00.498060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-20 16:41:00.508011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-20 16:41:00.508084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-20 16:41:00.508099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-20 16:41:00.508106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-20 16:41:00.508113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.662 [2024-11-20 16:41:00.508127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-20 16:41:00.518017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-20 16:41:00.518071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-20 16:41:00.518085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-20 16:41:00.518092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-20 16:41:00.518098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.662 [2024-11-20 16:41:00.518112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-20 16:41:00.528058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-20 16:41:00.528127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-20 16:41:00.528140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.662 [2024-11-20 16:41:00.528147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.662 [2024-11-20 16:41:00.528154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.662 [2024-11-20 16:41:00.528171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.662 qpair failed and we were unable to recover it.
00:29:14.662 [2024-11-20 16:41:00.538077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.662 [2024-11-20 16:41:00.538131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.662 [2024-11-20 16:41:00.538144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.663 [2024-11-20 16:41:00.538151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.663 [2024-11-20 16:41:00.538158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.663 [2024-11-20 16:41:00.538171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.663 qpair failed and we were unable to recover it.
00:29:14.663 [2024-11-20 16:41:00.548119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.663 [2024-11-20 16:41:00.548182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.663 [2024-11-20 16:41:00.548195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.663 [2024-11-20 16:41:00.548202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.663 [2024-11-20 16:41:00.548208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.663 [2024-11-20 16:41:00.548222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.663 qpair failed and we were unable to recover it.
00:29:14.663 [2024-11-20 16:41:00.558121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.663 [2024-11-20 16:41:00.558167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.663 [2024-11-20 16:41:00.558181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.663 [2024-11-20 16:41:00.558187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.663 [2024-11-20 16:41:00.558194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.663 [2024-11-20 16:41:00.558208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.663 qpair failed and we were unable to recover it.
00:29:14.663 [2024-11-20 16:41:00.568126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.663 [2024-11-20 16:41:00.568176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.663 [2024-11-20 16:41:00.568189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.663 [2024-11-20 16:41:00.568196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.663 [2024-11-20 16:41:00.568202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.663 [2024-11-20 16:41:00.568216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.663 qpair failed and we were unable to recover it.
00:29:14.663 [2024-11-20 16:41:00.578205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.663 [2024-11-20 16:41:00.578299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.663 [2024-11-20 16:41:00.578313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.663 [2024-11-20 16:41:00.578320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.663 [2024-11-20 16:41:00.578326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.663 [2024-11-20 16:41:00.578339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.663 qpair failed and we were unable to recover it.
00:29:14.663 [2024-11-20 16:41:00.588216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.663 [2024-11-20 16:41:00.588275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.663 [2024-11-20 16:41:00.588289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.663 [2024-11-20 16:41:00.588296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.663 [2024-11-20 16:41:00.588302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.663 [2024-11-20 16:41:00.588315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.663 qpair failed and we were unable to recover it.
00:29:14.663 [2024-11-20 16:41:00.598133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.663 [2024-11-20 16:41:00.598182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.663 [2024-11-20 16:41:00.598195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.663 [2024-11-20 16:41:00.598202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.663 [2024-11-20 16:41:00.598209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.663 [2024-11-20 16:41:00.598222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.663 qpair failed and we were unable to recover it.
00:29:14.663 [2024-11-20 16:41:00.608312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.663 [2024-11-20 16:41:00.608395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.663 [2024-11-20 16:41:00.608410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.663 [2024-11-20 16:41:00.608417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.663 [2024-11-20 16:41:00.608423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.663 [2024-11-20 16:41:00.608438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.663 qpair failed and we were unable to recover it.
00:29:14.924 [2024-11-20 16:41:00.618322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.924 [2024-11-20 16:41:00.618375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.924 [2024-11-20 16:41:00.618389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.924 [2024-11-20 16:41:00.618400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.924 [2024-11-20 16:41:00.618406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.924 [2024-11-20 16:41:00.618420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.924 qpair failed and we were unable to recover it.
00:29:14.924 [2024-11-20 16:41:00.628322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.924 [2024-11-20 16:41:00.628388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.924 [2024-11-20 16:41:00.628402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.924 [2024-11-20 16:41:00.628409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.924 [2024-11-20 16:41:00.628415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.628428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.638360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.638411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.638425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.638432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.638438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.638451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.648297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.648398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.648411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.648418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.648424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.648438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.658412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.658505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.658518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.658526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.658532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.658548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.668359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.668460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.668474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.668481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.668487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.668500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.678461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.678516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.678529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.678536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.678543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.678556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.688500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.688547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.688561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.688568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.688574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.688587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.698542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.698610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.698623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.698630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.698636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.698650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.708475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.708529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.708543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.708550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.708556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.708569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.718591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.718641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.718654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.718661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.718667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.718681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.728602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.728653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.728667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.728674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.728680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.728694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.738621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.738679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.738692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.738700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.738706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.738719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.748692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.748790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.748804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.748816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.748822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.925 [2024-11-20 16:41:00.748835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.925 qpair failed and we were unable to recover it.
00:29:14.925 [2024-11-20 16:41:00.758708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.925 [2024-11-20 16:41:00.758785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.925 [2024-11-20 16:41:00.758798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.925 [2024-11-20 16:41:00.758805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.925 [2024-11-20 16:41:00.758812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.926 [2024-11-20 16:41:00.758825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-20 16:41:00.768710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-20 16:41:00.768760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-20 16:41:00.768773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-20 16:41:00.768780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-20 16:41:00.768786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.926 [2024-11-20 16:41:00.768799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-20 16:41:00.778757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-20 16:41:00.778813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-20 16:41:00.778829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-20 16:41:00.778837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-20 16:41:00.778847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.926 [2024-11-20 16:41:00.778864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-20 16:41:00.788725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-20 16:41:00.788786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-20 16:41:00.788799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-20 16:41:00.788806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-20 16:41:00.788812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.926 [2024-11-20 16:41:00.788829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-20 16:41:00.798822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-20 16:41:00.798900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-20 16:41:00.798917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-20 16:41:00.798925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-20 16:41:00.798931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.926 [2024-11-20 16:41:00.798945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-20 16:41:00.808842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-20 16:41:00.808893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-20 16:41:00.808907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-20 16:41:00.808914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-20 16:41:00.808920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.926 [2024-11-20 16:41:00.808934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-20 16:41:00.818822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:14.926 [2024-11-20 16:41:00.818878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:14.926 [2024-11-20 16:41:00.818891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:14.926 [2024-11-20 16:41:00.818898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:14.926 [2024-11-20 16:41:00.818904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:14.926 [2024-11-20 16:41:00.818918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:14.926 qpair failed and we were unable to recover it.
00:29:14.926 [2024-11-20 16:41:00.828870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-20 16:41:00.828929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-20 16:41:00.828943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-20 16:41:00.828949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-20 16:41:00.828956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.926 [2024-11-20 16:41:00.828969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.926 qpair failed and we were unable to recover it. 
00:29:14.926 [2024-11-20 16:41:00.838928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-20 16:41:00.838987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-20 16:41:00.839002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-20 16:41:00.839010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-20 16:41:00.839017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.926 [2024-11-20 16:41:00.839034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.926 qpair failed and we were unable to recover it. 
00:29:14.926 [2024-11-20 16:41:00.848959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-20 16:41:00.849015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-20 16:41:00.849029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-20 16:41:00.849036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-20 16:41:00.849042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.926 [2024-11-20 16:41:00.849056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.926 qpair failed and we were unable to recover it. 
00:29:14.926 [2024-11-20 16:41:00.858995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-20 16:41:00.859048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-20 16:41:00.859061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-20 16:41:00.859068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-20 16:41:00.859075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.926 [2024-11-20 16:41:00.859088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.926 qpair failed and we were unable to recover it. 
00:29:14.926 [2024-11-20 16:41:00.869044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-20 16:41:00.869101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-20 16:41:00.869114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-20 16:41:00.869122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-20 16:41:00.869128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.926 [2024-11-20 16:41:00.869141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.926 qpair failed and we were unable to recover it. 
00:29:14.926 [2024-11-20 16:41:00.878968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.926 [2024-11-20 16:41:00.879063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.926 [2024-11-20 16:41:00.879078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.926 [2024-11-20 16:41:00.879089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.926 [2024-11-20 16:41:00.879095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:14.926 [2024-11-20 16:41:00.879110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.926 qpair failed and we were unable to recover it. 
00:29:15.188 [2024-11-20 16:41:00.889071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.188 [2024-11-20 16:41:00.889128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.188 [2024-11-20 16:41:00.889143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.188 [2024-11-20 16:41:00.889150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.188 [2024-11-20 16:41:00.889157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.188 [2024-11-20 16:41:00.889170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.188 qpair failed and we were unable to recover it. 
00:29:15.188 [2024-11-20 16:41:00.899121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.188 [2024-11-20 16:41:00.899189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.188 [2024-11-20 16:41:00.899202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.188 [2024-11-20 16:41:00.899210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.188 [2024-11-20 16:41:00.899217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.188 [2024-11-20 16:41:00.899230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.188 qpair failed and we were unable to recover it. 
00:29:15.188 [2024-11-20 16:41:00.909020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.188 [2024-11-20 16:41:00.909085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.188 [2024-11-20 16:41:00.909100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.188 [2024-11-20 16:41:00.909107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.188 [2024-11-20 16:41:00.909113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.188 [2024-11-20 16:41:00.909127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.188 qpair failed and we were unable to recover it. 
00:29:15.188 [2024-11-20 16:41:00.919044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.188 [2024-11-20 16:41:00.919094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.188 [2024-11-20 16:41:00.919108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.188 [2024-11-20 16:41:00.919115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.188 [2024-11-20 16:41:00.919121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.188 [2024-11-20 16:41:00.919138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.188 qpair failed and we were unable to recover it. 
00:29:15.188 [2024-11-20 16:41:00.929216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.188 [2024-11-20 16:41:00.929268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.188 [2024-11-20 16:41:00.929282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.188 [2024-11-20 16:41:00.929289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.188 [2024-11-20 16:41:00.929295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.188 [2024-11-20 16:41:00.929309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.188 qpair failed and we were unable to recover it. 
00:29:15.188 [2024-11-20 16:41:00.939210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.188 [2024-11-20 16:41:00.939269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.188 [2024-11-20 16:41:00.939282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.188 [2024-11-20 16:41:00.939289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.188 [2024-11-20 16:41:00.939295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.188 [2024-11-20 16:41:00.939308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.188 qpair failed and we were unable to recover it. 
00:29:15.188 [2024-11-20 16:41:00.949265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.188 [2024-11-20 16:41:00.949321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.188 [2024-11-20 16:41:00.949334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.188 [2024-11-20 16:41:00.949341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.188 [2024-11-20 16:41:00.949348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.188 [2024-11-20 16:41:00.949361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.188 qpair failed and we were unable to recover it. 
00:29:15.188 [2024-11-20 16:41:00.959274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.188 [2024-11-20 16:41:00.959326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.188 [2024-11-20 16:41:00.959340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.188 [2024-11-20 16:41:00.959347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.188 [2024-11-20 16:41:00.959353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:00.959367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:00.969293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:00.969374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:00.969387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:00.969394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:00.969400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:00.969414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:00.979334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:00.979414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:00.979428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:00.979435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:00.979441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:00.979455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:00.989368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:00.989419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:00.989434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:00.989440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:00.989447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:00.989460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:00.999343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:00.999395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:00.999408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:00.999415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:00.999421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:00.999434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.009401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.009447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.009461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.009472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.009478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.009491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.019441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.019500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.019514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.019521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.019528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.019541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.029441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.029498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.029511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.029518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.029525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.029538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.039498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.039546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.039559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.039566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.039572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.039585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.049417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.049465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.049478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.049486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.049492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.049509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.059551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.059608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.059622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.059629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.059635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.059648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.069580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.069633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.069647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.069654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.069660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.069674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.079610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.079663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.079677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.079684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.079690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.079703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.089601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.189 [2024-11-20 16:41:01.089701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.189 [2024-11-20 16:41:01.089715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.189 [2024-11-20 16:41:01.089723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.189 [2024-11-20 16:41:01.089729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.189 [2024-11-20 16:41:01.089742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.189 qpair failed and we were unable to recover it. 
00:29:15.189 [2024-11-20 16:41:01.099573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-20 16:41:01.099668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-20 16:41:01.099682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-20 16:41:01.099689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-20 16:41:01.099695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.190 [2024-11-20 16:41:01.099708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.190 qpair failed and we were unable to recover it. 
00:29:15.190 [2024-11-20 16:41:01.109564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-20 16:41:01.109619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-20 16:41:01.109632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-20 16:41:01.109640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-20 16:41:01.109646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.190 [2024-11-20 16:41:01.109659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.190 qpair failed and we were unable to recover it. 
00:29:15.190 [2024-11-20 16:41:01.119711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-20 16:41:01.119769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-20 16:41:01.119782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-20 16:41:01.119789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-20 16:41:01.119795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.190 [2024-11-20 16:41:01.119809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.190 qpair failed and we were unable to recover it. 
00:29:15.190 [2024-11-20 16:41:01.129617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-20 16:41:01.129684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-20 16:41:01.129698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-20 16:41:01.129705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-20 16:41:01.129711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.190 [2024-11-20 16:41:01.129724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.190 qpair failed and we were unable to recover it. 
00:29:15.190 [2024-11-20 16:41:01.139756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.190 [2024-11-20 16:41:01.139809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.190 [2024-11-20 16:41:01.139822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.190 [2024-11-20 16:41:01.139833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.190 [2024-11-20 16:41:01.139840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.190 [2024-11-20 16:41:01.139853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.190 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.149808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.149864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.149878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.149885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.149892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.149905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.159828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.159916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.159929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.159937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.159943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.159956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.169851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.169903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.169917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.169924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.169930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.169944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.179870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.179925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.179939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.179946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.179952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.179976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.189925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.189989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.190004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.190011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.190017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.190031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.199913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.199962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.199976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.199994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.200001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.200015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.209967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.210019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.210033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.210040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.210047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.210060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.219994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.220053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.220066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.220073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.220079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.220092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.230037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.230091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.230105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.230112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.452 [2024-11-20 16:41:01.230118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.452 [2024-11-20 16:41:01.230132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.452 qpair failed and we were unable to recover it. 
00:29:15.452 [2024-11-20 16:41:01.240065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.452 [2024-11-20 16:41:01.240119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.452 [2024-11-20 16:41:01.240132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.452 [2024-11-20 16:41:01.240139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.240146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.240159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.250091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.250148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.250162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.250169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.250175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.250188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.260091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.260143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.260156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.260163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.260169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.260183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.270119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.270188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.270201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.270212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.270218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.270232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.280180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.280231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.280244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.280251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.280257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.280271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.290221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.290276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.290290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.290297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.290304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.290317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.300233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.300284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.300298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.300305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.300311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.300324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.310276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.310331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.310344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.310351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.310357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.310374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.320258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.320312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.320325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.320332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.320339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.320352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.330289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.330339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.330352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.330358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.330365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.330378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.453 [2024-11-20 16:41:01.340347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.453 [2024-11-20 16:41:01.340416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.453 [2024-11-20 16:41:01.340429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.453 [2024-11-20 16:41:01.340436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.453 [2024-11-20 16:41:01.340442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.453 [2024-11-20 16:41:01.340455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.453 qpair failed and we were unable to recover it. 
00:29:15.454 [2024-11-20 16:41:01.350356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.454 [2024-11-20 16:41:01.350415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.454 [2024-11-20 16:41:01.350428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.454 [2024-11-20 16:41:01.350435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.454 [2024-11-20 16:41:01.350442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.454 [2024-11-20 16:41:01.350455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.454 qpair failed and we were unable to recover it. 
00:29:15.454 [2024-11-20 16:41:01.360407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.454 [2024-11-20 16:41:01.360463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.454 [2024-11-20 16:41:01.360477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.454 [2024-11-20 16:41:01.360484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.454 [2024-11-20 16:41:01.360491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.454 [2024-11-20 16:41:01.360504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.454 qpair failed and we were unable to recover it. 
00:29:15.454 [2024-11-20 16:41:01.370428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.454 [2024-11-20 16:41:01.370480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.454 [2024-11-20 16:41:01.370493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.454 [2024-11-20 16:41:01.370500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.454 [2024-11-20 16:41:01.370506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.454 [2024-11-20 16:41:01.370519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.454 qpair failed and we were unable to recover it. 
00:29:15.454 [2024-11-20 16:41:01.380442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.454 [2024-11-20 16:41:01.380496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.454 [2024-11-20 16:41:01.380509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.454 [2024-11-20 16:41:01.380516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.454 [2024-11-20 16:41:01.380522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.454 [2024-11-20 16:41:01.380536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.454 qpair failed and we were unable to recover it. 
00:29:15.454 [2024-11-20 16:41:01.390503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.454 [2024-11-20 16:41:01.390555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.454 [2024-11-20 16:41:01.390569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.454 [2024-11-20 16:41:01.390576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.454 [2024-11-20 16:41:01.390583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.454 [2024-11-20 16:41:01.390596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.454 qpair failed and we were unable to recover it. 
00:29:15.454 [2024-11-20 16:41:01.400492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.454 [2024-11-20 16:41:01.400551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.454 [2024-11-20 16:41:01.400565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.454 [2024-11-20 16:41:01.400576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.454 [2024-11-20 16:41:01.400582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.454 [2024-11-20 16:41:01.400595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.454 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-11-20 16:41:01.410414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-11-20 16:41:01.410461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-11-20 16:41:01.410475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.716 [2024-11-20 16:41:01.410483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.716 [2024-11-20 16:41:01.410489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.716 [2024-11-20 16:41:01.410504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.716 qpair failed and we were unable to recover it. 
00:29:15.716 [2024-11-20 16:41:01.420587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.716 [2024-11-20 16:41:01.420663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.716 [2024-11-20 16:41:01.420676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.420683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.420690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.420703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.430607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.430660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.430674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.430681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.430688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.430701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.440600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.440656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.440669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.440676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.440683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.440700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.450629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.450681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.450694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.450701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.450707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.450721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.460666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.460719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.460732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.460739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.460745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.460758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.470711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.470763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.470778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.470785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.470791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.470805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.480754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.480804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.480817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.480824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.480830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.480843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.490762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.490875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.490889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.490897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.490903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.490916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.500830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.500885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.500899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.500906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.500912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.500926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.510871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.510923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.510936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.510943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.510949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.717 [2024-11-20 16:41:01.510963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.717 qpair failed and we were unable to recover it. 
00:29:15.717 [2024-11-20 16:41:01.520917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.717 [2024-11-20 16:41:01.520971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.717 [2024-11-20 16:41:01.520988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.717 [2024-11-20 16:41:01.520995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.717 [2024-11-20 16:41:01.521002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.521015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.530872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.530923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.530937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.530948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.530954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.530967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.540877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.540935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.540950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.540957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.540963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.540977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.550925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.550979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.550996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.551003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.551010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.551023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.560848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.560902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.560916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.560923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.560930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.560943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.570992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.571043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.571057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.571064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.571070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.571084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.581015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.581072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.581085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.581092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.581098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.581111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.591039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.591107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.591121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.591128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.591134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.591147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.600977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.601037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.601050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.601057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.601064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.601078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.611046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.611095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.611110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.611117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.611123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.611137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.718 qpair failed and we were unable to recover it. 
00:29:15.718 [2024-11-20 16:41:01.621006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.718 [2024-11-20 16:41:01.621063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.718 [2024-11-20 16:41:01.621077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.718 [2024-11-20 16:41:01.621084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.718 [2024-11-20 16:41:01.621090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.718 [2024-11-20 16:41:01.621104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.719 qpair failed and we were unable to recover it. 
00:29:15.719 [2024-11-20 16:41:01.631127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-20 16:41:01.631198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-20 16:41:01.631211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.719 [2024-11-20 16:41:01.631218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.719 [2024-11-20 16:41:01.631225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.719 [2024-11-20 16:41:01.631238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.719 qpair failed and we were unable to recover it. 
00:29:15.719 [2024-11-20 16:41:01.641181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-20 16:41:01.641231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-20 16:41:01.641244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.719 [2024-11-20 16:41:01.641251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.719 [2024-11-20 16:41:01.641257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.719 [2024-11-20 16:41:01.641271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.719 qpair failed and we were unable to recover it. 
00:29:15.719 [2024-11-20 16:41:01.651141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-20 16:41:01.651187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-20 16:41:01.651200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.719 [2024-11-20 16:41:01.651207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.719 [2024-11-20 16:41:01.651213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.719 [2024-11-20 16:41:01.651226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.719 qpair failed and we were unable to recover it. 
00:29:15.719 [2024-11-20 16:41:01.661219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-20 16:41:01.661273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-20 16:41:01.661286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.719 [2024-11-20 16:41:01.661296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.719 [2024-11-20 16:41:01.661303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.719 [2024-11-20 16:41:01.661316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.719 qpair failed and we were unable to recover it. 
00:29:15.719 [2024-11-20 16:41:01.671283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.719 [2024-11-20 16:41:01.671338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.719 [2024-11-20 16:41:01.671351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.719 [2024-11-20 16:41:01.671358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.719 [2024-11-20 16:41:01.671364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.719 [2024-11-20 16:41:01.671377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.719 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.681293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.681345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.681358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.681365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.681372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.681385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.691287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.691332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.691346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.691353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.691360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.691373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.701338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.701394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.701407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.701414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.701421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.701434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.711392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.711443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.711456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.711463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.711469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.711483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.721341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.721393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.721406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.721413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.721419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.721432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.731384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.731427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.731440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.731447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.731453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.731466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.741448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.741514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.741528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.741535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.741541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.741554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.751367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.751426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.751439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.751446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.751452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.751466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.761456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.761503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.761516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.761523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.761529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.981 [2024-11-20 16:41:01.761542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.981 qpair failed and we were unable to recover it. 
00:29:15.981 [2024-11-20 16:41:01.771497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.981 [2024-11-20 16:41:01.771543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.981 [2024-11-20 16:41:01.771557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.981 [2024-11-20 16:41:01.771564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.981 [2024-11-20 16:41:01.771570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.771583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.781603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.781687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.781700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.781707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.781714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.781727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.791600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.791651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.791665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.791675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.791682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.791696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.801552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.801599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.801613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.801620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.801626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.801639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.811598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.811660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.811673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.811680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.811687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.811700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.821668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.821754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.821767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.821774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.821781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.821794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.831709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.831793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.831806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.831814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.831820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.831833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.841671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.841719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.841733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.841740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.841746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.841759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.851706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.851751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.851764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.851771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.851777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.851790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.861772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.861825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.861838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.861846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.861852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.861865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.871786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.871842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.871856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.871863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.871869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.871883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.881795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.881846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.881860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.881867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.881873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.881886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.891818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.891907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.891921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.891928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.891934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.891947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.982 [2024-11-20 16:41:01.901890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.982 [2024-11-20 16:41:01.901942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.982 [2024-11-20 16:41:01.901956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.982 [2024-11-20 16:41:01.901963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.982 [2024-11-20 16:41:01.901969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.982 [2024-11-20 16:41:01.901985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.982 qpair failed and we were unable to recover it. 
00:29:15.983 [2024-11-20 16:41:01.911864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.983 [2024-11-20 16:41:01.911918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.983 [2024-11-20 16:41:01.911931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.983 [2024-11-20 16:41:01.911938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.983 [2024-11-20 16:41:01.911945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.983 [2024-11-20 16:41:01.911958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.983 qpair failed and we were unable to recover it. 
00:29:15.983 [2024-11-20 16:41:01.921883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.983 [2024-11-20 16:41:01.921930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.983 [2024-11-20 16:41:01.921943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.983 [2024-11-20 16:41:01.921953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.983 [2024-11-20 16:41:01.921960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.983 [2024-11-20 16:41:01.921973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.983 qpair failed and we were unable to recover it. 
00:29:15.983 [2024-11-20 16:41:01.931897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.983 [2024-11-20 16:41:01.931943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.983 [2024-11-20 16:41:01.931956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.983 [2024-11-20 16:41:01.931963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.983 [2024-11-20 16:41:01.931969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:15.983 [2024-11-20 16:41:01.931986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.983 qpair failed and we were unable to recover it. 
00:29:16.244 [2024-11-20 16:41:01.941994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:01.942050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:01.942063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:01.942071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:01.942077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:01.942090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:01.951986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:01.952067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:01.952080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:01.952087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:01.952093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:01.952107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:01.962057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:01.962124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:01.962137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:01.962144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:01.962150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:01.962164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:01.972038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:01.972086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:01.972099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:01.972106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:01.972113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:01.972126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:01.982090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:01.982149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:01.982162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:01.982169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:01.982176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:01.982189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:01.991977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:01.992031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:01.992045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:01.992052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:01.992058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:01.992072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:02.002103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:02.002150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:02.002163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:02.002171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:02.002177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:02.002191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:02.012153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:02.012207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:02.012220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:02.012227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:02.012234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:02.012247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:02.022195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:02.022252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:02.022265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:02.022272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:02.022278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:02.022291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [2024-11-20 16:41:02.032220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.245 [2024-11-20 16:41:02.032272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.245 [2024-11-20 16:41:02.032285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.245 [2024-11-20 16:41:02.032292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.245 [2024-11-20 16:41:02.032298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.245 [2024-11-20 16:41:02.032312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.245 qpair failed and we were unable to recover it. 
00:29:16.245 [the same six-entry CONNECT failure block repeats 34 more times at ~10 ms intervals, 2024-11-20 16:41:02.042 through 16:41:02.373, with identical errors each time: Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; sct 1, sc 130; Failed to connect tqpair=0x8a1010; CQ transport error -6 (No such device or address) on qpair id 3; each attempt ends "qpair failed and we were unable to recover it."]
00:29:16.514 [2024-11-20 16:41:02.383178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.514 [2024-11-20 16:41:02.383240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.514 [2024-11-20 16:41:02.383253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.514 [2024-11-20 16:41:02.383260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.515 [2024-11-20 16:41:02.383267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.515 [2024-11-20 16:41:02.383280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.515 qpair failed and we were unable to recover it. 
00:29:16.515 [2024-11-20 16:41:02.393170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.515 [2024-11-20 16:41:02.393221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.515 [2024-11-20 16:41:02.393234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.515 [2024-11-20 16:41:02.393241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.515 [2024-11-20 16:41:02.393248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.515 [2024-11-20 16:41:02.393261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.515 qpair failed and we were unable to recover it. 
00:29:16.515 [2024-11-20 16:41:02.403177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.515 [2024-11-20 16:41:02.403230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.515 [2024-11-20 16:41:02.403244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.516 [2024-11-20 16:41:02.403252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.516 [2024-11-20 16:41:02.403258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.516 [2024-11-20 16:41:02.403271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.516 qpair failed and we were unable to recover it. 
00:29:16.516 [2024-11-20 16:41:02.413213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.516 [2024-11-20 16:41:02.413303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.516 [2024-11-20 16:41:02.413317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.516 [2024-11-20 16:41:02.413324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.516 [2024-11-20 16:41:02.413330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.516 [2024-11-20 16:41:02.413344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.516 qpair failed and we were unable to recover it. 
00:29:16.516 [2024-11-20 16:41:02.423162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.516 [2024-11-20 16:41:02.423219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.516 [2024-11-20 16:41:02.423234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.516 [2024-11-20 16:41:02.423241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.516 [2024-11-20 16:41:02.423247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.516 [2024-11-20 16:41:02.423261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.516 qpair failed and we were unable to recover it. 
00:29:16.516 [2024-11-20 16:41:02.433157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.516 [2024-11-20 16:41:02.433206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.516 [2024-11-20 16:41:02.433221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.516 [2024-11-20 16:41:02.433228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.516 [2024-11-20 16:41:02.433234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.516 [2024-11-20 16:41:02.433248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.516 qpair failed and we were unable to recover it. 
00:29:16.516 [2024-11-20 16:41:02.443268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.516 [2024-11-20 16:41:02.443329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.516 [2024-11-20 16:41:02.443343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.516 [2024-11-20 16:41:02.443354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.516 [2024-11-20 16:41:02.443360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.516 [2024-11-20 16:41:02.443374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.516 qpair failed and we were unable to recover it. 
00:29:16.516 [2024-11-20 16:41:02.453322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.516 [2024-11-20 16:41:02.453368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.516 [2024-11-20 16:41:02.453381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.516 [2024-11-20 16:41:02.453388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.516 [2024-11-20 16:41:02.453395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.516 [2024-11-20 16:41:02.453408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.516 qpair failed and we were unable to recover it. 
00:29:16.516 [2024-11-20 16:41:02.463359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.516 [2024-11-20 16:41:02.463415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.516 [2024-11-20 16:41:02.463429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.516 [2024-11-20 16:41:02.463436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.516 [2024-11-20 16:41:02.463442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.516 [2024-11-20 16:41:02.463455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.516 qpair failed and we were unable to recover it. 
00:29:16.779 [2024-11-20 16:41:02.473396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.779 [2024-11-20 16:41:02.473445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.779 [2024-11-20 16:41:02.473458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.779 [2024-11-20 16:41:02.473465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.779 [2024-11-20 16:41:02.473472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.779 [2024-11-20 16:41:02.473485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.779 qpair failed and we were unable to recover it. 
00:29:16.779 [2024-11-20 16:41:02.483274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.779 [2024-11-20 16:41:02.483320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.779 [2024-11-20 16:41:02.483333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.779 [2024-11-20 16:41:02.483340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.779 [2024-11-20 16:41:02.483347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.779 [2024-11-20 16:41:02.483360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.779 qpair failed and we were unable to recover it. 
00:29:16.779 [2024-11-20 16:41:02.493428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.779 [2024-11-20 16:41:02.493481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.779 [2024-11-20 16:41:02.493494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.779 [2024-11-20 16:41:02.493501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.779 [2024-11-20 16:41:02.493508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.779 [2024-11-20 16:41:02.493522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.779 qpair failed and we were unable to recover it. 
00:29:16.779 [2024-11-20 16:41:02.503496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.779 [2024-11-20 16:41:02.503550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.779 [2024-11-20 16:41:02.503564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.779 [2024-11-20 16:41:02.503570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.779 [2024-11-20 16:41:02.503577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.779 [2024-11-20 16:41:02.503590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.779 qpair failed and we were unable to recover it. 
00:29:16.779 [2024-11-20 16:41:02.513494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.779 [2024-11-20 16:41:02.513543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.779 [2024-11-20 16:41:02.513556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.779 [2024-11-20 16:41:02.513563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.779 [2024-11-20 16:41:02.513569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.779 [2024-11-20 16:41:02.513583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.779 qpair failed and we were unable to recover it. 
00:29:16.779 [2024-11-20 16:41:02.523467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.779 [2024-11-20 16:41:02.523514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.779 [2024-11-20 16:41:02.523528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.779 [2024-11-20 16:41:02.523536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.779 [2024-11-20 16:41:02.523542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.779 [2024-11-20 16:41:02.523555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.779 qpair failed and we were unable to recover it. 
00:29:16.779 [2024-11-20 16:41:02.533492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.779 [2024-11-20 16:41:02.533545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.779 [2024-11-20 16:41:02.533558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.533565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.533572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.533585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.543565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.543619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.543632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.543639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.543645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.543658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.553474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.553526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.553539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.553546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.553552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.553565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.563624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.563684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.563698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.563705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.563712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.563725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.573650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.573702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.573716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.573726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.573733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.573746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.583686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.583745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.583759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.583766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.583772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.583785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.593729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.593792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.593806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.593814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.593820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.593834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.603716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.603762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.603775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.603782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.603788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.603801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.613654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.613725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.613742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.613749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.613756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.613771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.623818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.623910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.623924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.780 [2024-11-20 16:41:02.623932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.780 [2024-11-20 16:41:02.623938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.780 [2024-11-20 16:41:02.623952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.780 qpair failed and we were unable to recover it. 
00:29:16.780 [2024-11-20 16:41:02.633841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.780 [2024-11-20 16:41:02.633890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.780 [2024-11-20 16:41:02.633904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.633911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.633917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.633930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.643714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.643759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.643773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.643780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.643786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.643800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.653844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.653887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.653901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.653908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.653914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.653928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.663917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.663971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.663994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.664002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.664008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.664022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.673938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.673989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.674003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.674010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.674017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.674030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.683959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.684015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.684028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.684036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.684042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.684055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.693946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.693997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.694011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.694019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.694025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.694039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.704045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.704103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.704116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.704128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.704135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.704148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.714053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.714105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.714119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.714126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.714132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.714146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.723930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.723997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.724011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.781 [2024-11-20 16:41:02.724018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.781 [2024-11-20 16:41:02.724024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.781 [2024-11-20 16:41:02.724038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.781 qpair failed and we were unable to recover it. 
00:29:16.781 [2024-11-20 16:41:02.734079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.781 [2024-11-20 16:41:02.734127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.781 [2024-11-20 16:41:02.734140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.782 [2024-11-20 16:41:02.734147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.782 [2024-11-20 16:41:02.734154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:16.782 [2024-11-20 16:41:02.734167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.782 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.744153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.744207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.744220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.744227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.744234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.744247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.754157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.754213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.754227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.754233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.754240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.754253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.764134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.764184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.764198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.764205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.764211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.764224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.774204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.774249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.774262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.774270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.774276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.774289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.784169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.784270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.784283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.784290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.784296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.784309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.794258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.794355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.794372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.794380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.794386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.794399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.804199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.804252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.804267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.804274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.804281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.804295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.814305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.814397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.814411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.814418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.814425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.814438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.824368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.824453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.044 [2024-11-20 16:41:02.824466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.044 [2024-11-20 16:41:02.824473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.044 [2024-11-20 16:41:02.824479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.044 [2024-11-20 16:41:02.824493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.044 qpair failed and we were unable to recover it. 
00:29:17.044 [2024-11-20 16:41:02.834364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.044 [2024-11-20 16:41:02.834453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.834467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.834477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.834483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.834497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.844249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.844295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.844308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.844315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.844322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.844335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.854419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.854470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.854483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.854491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.854497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.854510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.864493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.864546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.864560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.864567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.864573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.864586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.874354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.874405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.874418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.874425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.874431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.874444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.884476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.884529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.884543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.884551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.884557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.884571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.894520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.894565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.894578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.894585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.894591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.894605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.904562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.904623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.904636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.904643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.904649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.904662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.914571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.914636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.914650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.914657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.914663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.914676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.924597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.924643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.924660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.924667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.924674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.924687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.934638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.934694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.934707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.934715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.934721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.934734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.944682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.045 [2024-11-20 16:41:02.944736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.045 [2024-11-20 16:41:02.944749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.045 [2024-11-20 16:41:02.944756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.045 [2024-11-20 16:41:02.944762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.045 [2024-11-20 16:41:02.944775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.045 qpair failed and we were unable to recover it. 
00:29:17.045 [2024-11-20 16:41:02.954694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.046 [2024-11-20 16:41:02.954741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.046 [2024-11-20 16:41:02.954754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.046 [2024-11-20 16:41:02.954761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.046 [2024-11-20 16:41:02.954768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.046 [2024-11-20 16:41:02.954781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.046 qpair failed and we were unable to recover it. 
00:29:17.046 [2024-11-20 16:41:02.964768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.046 [2024-11-20 16:41:02.964830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.046 [2024-11-20 16:41:02.964843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.046 [2024-11-20 16:41:02.964856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.046 [2024-11-20 16:41:02.964862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.046 [2024-11-20 16:41:02.964876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.046 qpair failed and we were unable to recover it. 
00:29:17.046 [2024-11-20 16:41:02.974735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.046 [2024-11-20 16:41:02.974783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.046 [2024-11-20 16:41:02.974797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.046 [2024-11-20 16:41:02.974804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.046 [2024-11-20 16:41:02.974810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.046 [2024-11-20 16:41:02.974823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.046 qpair failed and we were unable to recover it. 
00:29:17.046 [2024-11-20 16:41:02.984803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.046 [2024-11-20 16:41:02.984859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.046 [2024-11-20 16:41:02.984873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.046 [2024-11-20 16:41:02.984880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.046 [2024-11-20 16:41:02.984886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.046 [2024-11-20 16:41:02.984900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.046 qpair failed and we were unable to recover it. 
00:29:17.046 [2024-11-20 16:41:02.994797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.046 [2024-11-20 16:41:02.994867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.046 [2024-11-20 16:41:02.994881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.046 [2024-11-20 16:41:02.994888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.046 [2024-11-20 16:41:02.994894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.046 [2024-11-20 16:41:02.994907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.046 qpair failed and we were unable to recover it. 
00:29:17.308 [2024-11-20 16:41:03.004811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.308 [2024-11-20 16:41:03.004858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.308 [2024-11-20 16:41:03.004872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.308 [2024-11-20 16:41:03.004879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.308 [2024-11-20 16:41:03.004886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.308 [2024-11-20 16:41:03.004899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.308 qpair failed and we were unable to recover it. 
00:29:17.308 [2024-11-20 16:41:03.014841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.308 [2024-11-20 16:41:03.014889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.308 [2024-11-20 16:41:03.014903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.308 [2024-11-20 16:41:03.014910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.308 [2024-11-20 16:41:03.014916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.308 [2024-11-20 16:41:03.014930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.308 qpair failed and we were unable to recover it. 
00:29:17.308 [2024-11-20 16:41:03.024878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.308 [2024-11-20 16:41:03.024933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.308 [2024-11-20 16:41:03.024947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.308 [2024-11-20 16:41:03.024954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.308 [2024-11-20 16:41:03.024960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.308 [2024-11-20 16:41:03.024973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.308 qpair failed and we were unable to recover it. 
00:29:17.308 [2024-11-20 16:41:03.034913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.308 [2024-11-20 16:41:03.034963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.308 [2024-11-20 16:41:03.034977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.308 [2024-11-20 16:41:03.034989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.308 [2024-11-20 16:41:03.034996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.308 [2024-11-20 16:41:03.035009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.308 qpair failed and we were unable to recover it. 
00:29:17.308 [2024-11-20 16:41:03.044929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.308 [2024-11-20 16:41:03.044977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.308 [2024-11-20 16:41:03.044995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.308 [2024-11-20 16:41:03.045002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.308 [2024-11-20 16:41:03.045009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.308 [2024-11-20 16:41:03.045022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.308 qpair failed and we were unable to recover it. 
00:29:17.308 [2024-11-20 16:41:03.054946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.308 [2024-11-20 16:41:03.055000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.055016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.055024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.055030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.055044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.064979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.065054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.065067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.065075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.065081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.065095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.074974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.075028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.075041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.075049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.075055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.075069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.085409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.085508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.085522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.085530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.085536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.085550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.095065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.095143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.095156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.095167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.095173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.095187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.105119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.105174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.105187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.105195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.105201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.105215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.115104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.115150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.115164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.115171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.115177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.115190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.125152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.125204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.125218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.125225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.125231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.125244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.135087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.135131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.135144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.135151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.135158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.135171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.145252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.145305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.145318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.145325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.145332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.145345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.155206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.155283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.155296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.155303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.155309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.155322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.309 [2024-11-20 16:41:03.165166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.309 [2024-11-20 16:41:03.165210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.309 [2024-11-20 16:41:03.165223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.309 [2024-11-20 16:41:03.165230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.309 [2024-11-20 16:41:03.165237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.309 [2024-11-20 16:41:03.165250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.309 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.175259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.175321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.175335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.175342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.175348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.175361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.185375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.185430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.185448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.185455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.185461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.185474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.195209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.195279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.195293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.195299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.195306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.195319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.205324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.205398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.205413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.205420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.205426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.205440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.215377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.215423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.215437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.215444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.215451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.215464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.225436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.225489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.225502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.225513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.225519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.225532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.235422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.235468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.235481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.235488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.235494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.235508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.245464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.245512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.245525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.245532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.245539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.245552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.310 [2024-11-20 16:41:03.255491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.310 [2024-11-20 16:41:03.255586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.310 [2024-11-20 16:41:03.255600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.310 [2024-11-20 16:41:03.255607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.310 [2024-11-20 16:41:03.255613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.310 [2024-11-20 16:41:03.255626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.310 qpair failed and we were unable to recover it. 
00:29:17.571 [2024-11-20 16:41:03.265558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.265639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.265654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.265661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.265669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.265685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.275553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.275601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.275615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.275622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.275628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.275642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.285555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.285601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.285615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.285622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.285628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.285641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.295605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.295655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.295669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.295676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.295682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.295695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.305687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.305737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.305750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.305757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.305763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.305776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.315670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.315721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.315737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.315744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.315751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.315764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.325699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.325745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.325758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.325765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.325771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.325784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.335686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.335742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.335767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.335776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.335783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.335801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.345823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.345881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.345896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.345903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.345910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.345925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.355760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.355853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.355867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.355879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.355886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.355900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.365792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.365850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.365864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.365871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.365877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.365891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.375817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.375863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.375876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.375883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.375890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.375903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.385865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.572 [2024-11-20 16:41:03.385939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.572 [2024-11-20 16:41:03.385953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.572 [2024-11-20 16:41:03.385960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.572 [2024-11-20 16:41:03.385967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.572 [2024-11-20 16:41:03.385980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.572 qpair failed and we were unable to recover it. 
00:29:17.572 [2024-11-20 16:41:03.395891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.395938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.395952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.395959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.395966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.395979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.405808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.405859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.405873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.405881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.405887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.405900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.415917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.415966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.415979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.415990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.415996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.416010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.425988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.426042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.426056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.426063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.426069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.426082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.436003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.436055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.436069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.436076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.436082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.436096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.446006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.446098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.446114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.446121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.446127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.446141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.455897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.455940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.455954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.455962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.455968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.455987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.466096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.466150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.466164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.466171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.466177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.466191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.476083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.476144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.476158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.476165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.476171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.476184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.486115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.486157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.573 [2024-11-20 16:41:03.486171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.573 [2024-11-20 16:41:03.486181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.573 [2024-11-20 16:41:03.486188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.573 [2024-11-20 16:41:03.486201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.573 qpair failed and we were unable to recover it. 
00:29:17.573 [2024-11-20 16:41:03.496100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.573 [2024-11-20 16:41:03.496172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.574 [2024-11-20 16:41:03.496185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.574 [2024-11-20 16:41:03.496192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.574 [2024-11-20 16:41:03.496198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.574 [2024-11-20 16:41:03.496212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.574 qpair failed and we were unable to recover it. 
00:29:17.574 [2024-11-20 16:41:03.506205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.574 [2024-11-20 16:41:03.506264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.574 [2024-11-20 16:41:03.506279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.574 [2024-11-20 16:41:03.506286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.574 [2024-11-20 16:41:03.506293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.574 [2024-11-20 16:41:03.506310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.574 qpair failed and we were unable to recover it. 
00:29:17.574 [2024-11-20 16:41:03.516234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.574 [2024-11-20 16:41:03.516294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.574 [2024-11-20 16:41:03.516309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.574 [2024-11-20 16:41:03.516316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.574 [2024-11-20 16:41:03.516322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.574 [2024-11-20 16:41:03.516336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.574 qpair failed and we were unable to recover it. 
00:29:17.574 [2024-11-20 16:41:03.526227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.574 [2024-11-20 16:41:03.526289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.574 [2024-11-20 16:41:03.526303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.574 [2024-11-20 16:41:03.526310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.574 [2024-11-20 16:41:03.526316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.574 [2024-11-20 16:41:03.526329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.574 qpair failed and we were unable to recover it. 
00:29:17.836 [2024-11-20 16:41:03.536264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-20 16:41:03.536311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-20 16:41:03.536324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-20 16:41:03.536331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-20 16:41:03.536337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.836 [2024-11-20 16:41:03.536351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.836 qpair failed and we were unable to recover it. 
00:29:17.836 [2024-11-20 16:41:03.546306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.836 [2024-11-20 16:41:03.546359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.836 [2024-11-20 16:41:03.546373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.836 [2024-11-20 16:41:03.546380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.836 [2024-11-20 16:41:03.546386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.837 [2024-11-20 16:41:03.546399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.837 qpair failed and we were unable to recover it. 
00:29:17.837 [2024-11-20 16:41:03.556317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-20 16:41:03.556364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-20 16:41:03.556377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-20 16:41:03.556385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-20 16:41:03.556391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.837 [2024-11-20 16:41:03.556404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.837 qpair failed and we were unable to recover it. 
00:29:17.837 [2024-11-20 16:41:03.566312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-20 16:41:03.566360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-20 16:41:03.566373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-20 16:41:03.566380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-20 16:41:03.566386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.837 [2024-11-20 16:41:03.566399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.837 qpair failed and we were unable to recover it. 
00:29:17.837 [2024-11-20 16:41:03.576361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.837 [2024-11-20 16:41:03.576415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.837 [2024-11-20 16:41:03.576432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.837 [2024-11-20 16:41:03.576440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.837 [2024-11-20 16:41:03.576446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:17.837 [2024-11-20 16:41:03.576459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.837 qpair failed and we were unable to recover it. 
00:29:17.837 [2024-11-20 16:41:03.586430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.837 [2024-11-20 16:41:03.586481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.837 [2024-11-20 16:41:03.586494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.837 [2024-11-20 16:41:03.586501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.837 [2024-11-20 16:41:03.586508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.837 [2024-11-20 16:41:03.586521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.837 qpair failed and we were unable to recover it.
00:29:17.837 [2024-11-20 16:41:03.596423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.837 [2024-11-20 16:41:03.596473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.837 [2024-11-20 16:41:03.596487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.837 [2024-11-20 16:41:03.596494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.837 [2024-11-20 16:41:03.596500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.837 [2024-11-20 16:41:03.596513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.837 qpair failed and we were unable to recover it.
00:29:17.837 [2024-11-20 16:41:03.606434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.837 [2024-11-20 16:41:03.606518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.837 [2024-11-20 16:41:03.606532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.837 [2024-11-20 16:41:03.606539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.837 [2024-11-20 16:41:03.606545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.837 [2024-11-20 16:41:03.606558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.837 qpair failed and we were unable to recover it.
00:29:17.837 [2024-11-20 16:41:03.616455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.837 [2024-11-20 16:41:03.616502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.837 [2024-11-20 16:41:03.616517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.837 [2024-11-20 16:41:03.616531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.837 [2024-11-20 16:41:03.616538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.837 [2024-11-20 16:41:03.616552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.837 qpair failed and we were unable to recover it.
00:29:17.837 [2024-11-20 16:41:03.626539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.837 [2024-11-20 16:41:03.626594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.837 [2024-11-20 16:41:03.626608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.837 [2024-11-20 16:41:03.626615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.837 [2024-11-20 16:41:03.626621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.837 [2024-11-20 16:41:03.626634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.837 qpair failed and we were unable to recover it.
00:29:17.837 [2024-11-20 16:41:03.636533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.837 [2024-11-20 16:41:03.636633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.837 [2024-11-20 16:41:03.636646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.837 [2024-11-20 16:41:03.636653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.837 [2024-11-20 16:41:03.636660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.837 [2024-11-20 16:41:03.636673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.837 qpair failed and we were unable to recover it.
00:29:17.837 [2024-11-20 16:41:03.646540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.837 [2024-11-20 16:41:03.646590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.837 [2024-11-20 16:41:03.646605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.837 [2024-11-20 16:41:03.646612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.837 [2024-11-20 16:41:03.646619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.837 [2024-11-20 16:41:03.646635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.837 qpair failed and we were unable to recover it.
00:29:17.837 [2024-11-20 16:41:03.656560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.837 [2024-11-20 16:41:03.656616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.837 [2024-11-20 16:41:03.656630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.837 [2024-11-20 16:41:03.656637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.837 [2024-11-20 16:41:03.656643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.656657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.666607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.666671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.666685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.666692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.666698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.666711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.676641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.676696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.676721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.676729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.676736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.676755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.686525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.686571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.686588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.686596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.686603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.686618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.696660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.696707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.696721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.696728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.696735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.696749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.706731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.706789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.706818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.706826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.706833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.706852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.716733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.716783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.716799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.716806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.716813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.716828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.726742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.726803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.726827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.726836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.726843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.726862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.736767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.736868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.736884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.736891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.736897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.736911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.746837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.746921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.746935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.746946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.746953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.746967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.756832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.756882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.756896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.756903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.756909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.756923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.766813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.766857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.766871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.766878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.766885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.766898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.776884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.776926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.776940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.776947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.776953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.776967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.838 qpair failed and we were unable to recover it.
00:29:17.838 [2024-11-20 16:41:03.786961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:17.838 [2024-11-20 16:41:03.787019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:17.838 [2024-11-20 16:41:03.787033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:17.838 [2024-11-20 16:41:03.787041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:17.838 [2024-11-20 16:41:03.787047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:17.838 [2024-11-20 16:41:03.787061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.839 qpair failed and we were unable to recover it.
00:29:18.102 [2024-11-20 16:41:03.796964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.102 [2024-11-20 16:41:03.797017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.102 [2024-11-20 16:41:03.797031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.102 [2024-11-20 16:41:03.797038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.102 [2024-11-20 16:41:03.797045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.102 [2024-11-20 16:41:03.797058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.102 qpair failed and we were unable to recover it.
00:29:18.102 [2024-11-20 16:41:03.807011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.102 [2024-11-20 16:41:03.807096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.102 [2024-11-20 16:41:03.807110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.102 [2024-11-20 16:41:03.807117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.102 [2024-11-20 16:41:03.807124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.102 [2024-11-20 16:41:03.807137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.102 qpair failed and we were unable to recover it.
00:29:18.102 [2024-11-20 16:41:03.816917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.102 [2024-11-20 16:41:03.816967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.102 [2024-11-20 16:41:03.816980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.102 [2024-11-20 16:41:03.816991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.102 [2024-11-20 16:41:03.816998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.102 [2024-11-20 16:41:03.817011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.102 qpair failed and we were unable to recover it.
00:29:18.102 [2024-11-20 16:41:03.827045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.102 [2024-11-20 16:41:03.827106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.102 [2024-11-20 16:41:03.827120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.102 [2024-11-20 16:41:03.827127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.102 [2024-11-20 16:41:03.827133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.102 [2024-11-20 16:41:03.827146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.102 qpair failed and we were unable to recover it.
00:29:18.102 [2024-11-20 16:41:03.837040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.102 [2024-11-20 16:41:03.837084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.102 [2024-11-20 16:41:03.837101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.102 [2024-11-20 16:41:03.837109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.102 [2024-11-20 16:41:03.837115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.102 [2024-11-20 16:41:03.837129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.102 qpair failed and we were unable to recover it.
00:29:18.102 [2024-11-20 16:41:03.847082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.102 [2024-11-20 16:41:03.847133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.102 [2024-11-20 16:41:03.847146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.102 [2024-11-20 16:41:03.847153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.102 [2024-11-20 16:41:03.847159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.102 [2024-11-20 16:41:03.847173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.102 qpair failed and we were unable to recover it.
00:29:18.102 [2024-11-20 16:41:03.857856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.102 [2024-11-20 16:41:03.857906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.102 [2024-11-20 16:41:03.857919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.102 [2024-11-20 16:41:03.857926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.102 [2024-11-20 16:41:03.857932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.102 [2024-11-20 16:41:03.857946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.102 qpair failed and we were unable to recover it.
00:29:18.102 [2024-11-20 16:41:03.867147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.102 [2024-11-20 16:41:03.867204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.102 [2024-11-20 16:41:03.867217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.102 [2024-11-20 16:41:03.867224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.102 [2024-11-20 16:41:03.867231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.102 [2024-11-20 16:41:03.867244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.102 qpair failed and we were unable to recover it.
00:29:18.103 [2024-11-20 16:41:03.877203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.103 [2024-11-20 16:41:03.877287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.103 [2024-11-20 16:41:03.877300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.103 [2024-11-20 16:41:03.877310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.103 [2024-11-20 16:41:03.877316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.103 [2024-11-20 16:41:03.877330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.103 qpair failed and we were unable to recover it.
00:29:18.103 [2024-11-20 16:41:03.887197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.103 [2024-11-20 16:41:03.887245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.103 [2024-11-20 16:41:03.887259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.103 [2024-11-20 16:41:03.887266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.103 [2024-11-20 16:41:03.887273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.103 [2024-11-20 16:41:03.887286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.103 qpair failed and we were unable to recover it.
00:29:18.103 [2024-11-20 16:41:03.897220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.103 [2024-11-20 16:41:03.897268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.103 [2024-11-20 16:41:03.897281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.103 [2024-11-20 16:41:03.897288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.103 [2024-11-20 16:41:03.897294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.103 [2024-11-20 16:41:03.897308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.103 qpair failed and we were unable to recover it.
00:29:18.103 [2024-11-20 16:41:03.907277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.103 [2024-11-20 16:41:03.907330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.103 [2024-11-20 16:41:03.907343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.103 [2024-11-20 16:41:03.907350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.103 [2024-11-20 16:41:03.907356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.103 [2024-11-20 16:41:03.907370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.103 qpair failed and we were unable to recover it.
00:29:18.103 [2024-11-20 16:41:03.917262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.103 [2024-11-20 16:41:03.917320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.103 [2024-11-20 16:41:03.917333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.103 [2024-11-20 16:41:03.917341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.103 [2024-11-20 16:41:03.917347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.103 [2024-11-20 16:41:03.917360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.103 qpair failed and we were unable to recover it.
00:29:18.103 [2024-11-20 16:41:03.927287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.103 [2024-11-20 16:41:03.927333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.103 [2024-11-20 16:41:03.927346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.103 [2024-11-20 16:41:03.927353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.103 [2024-11-20 16:41:03.927359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.103 [2024-11-20 16:41:03.927373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.103 qpair failed and we were unable to recover it.
00:29:18.103 [2024-11-20 16:41:03.937315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.103 [2024-11-20 16:41:03.937387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.103 [2024-11-20 16:41:03.937400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.103 [2024-11-20 16:41:03.937407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.103 [2024-11-20 16:41:03.937414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.103 [2024-11-20 16:41:03.937427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.103 qpair failed and we were unable to recover it. 
00:29:18.103 [2024-11-20 16:41:03.947373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.103 [2024-11-20 16:41:03.947435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.103 [2024-11-20 16:41:03.947448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.103 [2024-11-20 16:41:03.947455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.103 [2024-11-20 16:41:03.947462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.103 [2024-11-20 16:41:03.947475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.103 qpair failed and we were unable to recover it. 
00:29:18.103 [2024-11-20 16:41:03.957390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.103 [2024-11-20 16:41:03.957437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.103 [2024-11-20 16:41:03.957451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.103 [2024-11-20 16:41:03.957458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.103 [2024-11-20 16:41:03.957464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.103 [2024-11-20 16:41:03.957477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.103 qpair failed and we were unable to recover it. 
00:29:18.103 [2024-11-20 16:41:03.967328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.103 [2024-11-20 16:41:03.967424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.103 [2024-11-20 16:41:03.967440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.103 [2024-11-20 16:41:03.967447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.103 [2024-11-20 16:41:03.967454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.103 [2024-11-20 16:41:03.967467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.103 qpair failed and we were unable to recover it. 
00:29:18.103 [2024-11-20 16:41:03.977418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.103 [2024-11-20 16:41:03.977470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.103 [2024-11-20 16:41:03.977483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.103 [2024-11-20 16:41:03.977490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.103 [2024-11-20 16:41:03.977496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.103 [2024-11-20 16:41:03.977509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.103 qpair failed and we were unable to recover it. 
00:29:18.104 [2024-11-20 16:41:03.987389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.104 [2024-11-20 16:41:03.987443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.104 [2024-11-20 16:41:03.987457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.104 [2024-11-20 16:41:03.987464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.104 [2024-11-20 16:41:03.987471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.104 [2024-11-20 16:41:03.987484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.104 qpair failed and we were unable to recover it. 
00:29:18.104 [2024-11-20 16:41:03.997478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.104 [2024-11-20 16:41:03.997530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.104 [2024-11-20 16:41:03.997543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.104 [2024-11-20 16:41:03.997550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.104 [2024-11-20 16:41:03.997556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.104 [2024-11-20 16:41:03.997570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.104 qpair failed and we were unable to recover it. 
00:29:18.104 [2024-11-20 16:41:04.007532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.104 [2024-11-20 16:41:04.007579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.104 [2024-11-20 16:41:04.007593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.104 [2024-11-20 16:41:04.007600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.104 [2024-11-20 16:41:04.007610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.104 [2024-11-20 16:41:04.007623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.104 qpair failed and we were unable to recover it. 
00:29:18.104 [2024-11-20 16:41:04.017546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.104 [2024-11-20 16:41:04.017595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.104 [2024-11-20 16:41:04.017609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.104 [2024-11-20 16:41:04.017616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.104 [2024-11-20 16:41:04.017622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.104 [2024-11-20 16:41:04.017636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.104 qpair failed and we were unable to recover it. 
00:29:18.104 [2024-11-20 16:41:04.027492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.104 [2024-11-20 16:41:04.027548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.104 [2024-11-20 16:41:04.027561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.104 [2024-11-20 16:41:04.027569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.104 [2024-11-20 16:41:04.027575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.104 [2024-11-20 16:41:04.027588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.104 qpair failed and we were unable to recover it. 
00:29:18.104 [2024-11-20 16:41:04.037617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.104 [2024-11-20 16:41:04.037679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.104 [2024-11-20 16:41:04.037692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.104 [2024-11-20 16:41:04.037699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.104 [2024-11-20 16:41:04.037706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.104 [2024-11-20 16:41:04.037719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.104 qpair failed and we were unable to recover it. 
00:29:18.104 [2024-11-20 16:41:04.047493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.104 [2024-11-20 16:41:04.047543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.104 [2024-11-20 16:41:04.047556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.104 [2024-11-20 16:41:04.047563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.104 [2024-11-20 16:41:04.047570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.104 [2024-11-20 16:41:04.047583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.104 qpair failed and we were unable to recover it. 
00:29:18.366 [2024-11-20 16:41:04.057517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.366 [2024-11-20 16:41:04.057560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.366 [2024-11-20 16:41:04.057574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.366 [2024-11-20 16:41:04.057581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.366 [2024-11-20 16:41:04.057587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.366 [2024-11-20 16:41:04.057601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.366 qpair failed and we were unable to recover it. 
00:29:18.366 [2024-11-20 16:41:04.067723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.366 [2024-11-20 16:41:04.067778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.366 [2024-11-20 16:41:04.067791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.366 [2024-11-20 16:41:04.067798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.366 [2024-11-20 16:41:04.067804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.366 [2024-11-20 16:41:04.067817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.366 qpair failed and we were unable to recover it. 
00:29:18.366 [2024-11-20 16:41:04.077688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.077739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.077753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.077760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.077766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.077779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.087727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.087772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.087786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.087793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.087799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.087813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.097633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.097679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.097696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.097703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.097709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.097722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.107821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.107873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.107886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.107893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.107899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.107912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.117826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.117876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.117889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.117897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.117903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.117916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.127823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.127870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.127883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.127890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.127896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.127910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.137833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.137882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.137895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.137902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.137912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.137925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.147794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.147853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.147866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.147873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.147880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.147893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.157802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.157855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.157869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.157876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.157882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.157896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.167943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.168022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.168037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.168045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.168052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.168067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.177835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.177885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.177898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.177905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.177912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.177925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.188083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.188150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.188164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.188172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.188178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.188191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.198044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.367 [2024-11-20 16:41:04.198143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.367 [2024-11-20 16:41:04.198157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.367 [2024-11-20 16:41:04.198164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.367 [2024-11-20 16:41:04.198171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.367 [2024-11-20 16:41:04.198184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.367 qpair failed and we were unable to recover it. 
00:29:18.367 [2024-11-20 16:41:04.208051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.367 [2024-11-20 16:41:04.208099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.208113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.208120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.208127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.208140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.218031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.218076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.218089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.218097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.218103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.218117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.228140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.228194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.228211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.228218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.228224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.228238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.238127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.238194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.238208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.238215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.238221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.238235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.248053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.248098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.248112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.248119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.248125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.248138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.258082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.258129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.258143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.258150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.258156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.258169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.268282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.268367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.268380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.268387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.268400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.268413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.278233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.278283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.278296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.278303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.278311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.278324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.288253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.288309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.288323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.288330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.288336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.288350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.298238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.298303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.298316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.298323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.298329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.298343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.308354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.308441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.308455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.308462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.308469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.308482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.368 [2024-11-20 16:41:04.318224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.368 [2024-11-20 16:41:04.318278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.368 [2024-11-20 16:41:04.318291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.368 [2024-11-20 16:41:04.318298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.368 [2024-11-20 16:41:04.318304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.368 [2024-11-20 16:41:04.318318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.368 qpair failed and we were unable to recover it.
00:29:18.632 [2024-11-20 16:41:04.328355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.632 [2024-11-20 16:41:04.328406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.632 [2024-11-20 16:41:04.328419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.632 [2024-11-20 16:41:04.328426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.632 [2024-11-20 16:41:04.328432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.632 [2024-11-20 16:41:04.328446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.632 qpair failed and we were unable to recover it.
00:29:18.632 [2024-11-20 16:41:04.338389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.632 [2024-11-20 16:41:04.338488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.632 [2024-11-20 16:41:04.338502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.632 [2024-11-20 16:41:04.338509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.632 [2024-11-20 16:41:04.338515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.632 [2024-11-20 16:41:04.338528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.632 qpair failed and we were unable to recover it.
00:29:18.632 [2024-11-20 16:41:04.348464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.632 [2024-11-20 16:41:04.348523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.632 [2024-11-20 16:41:04.348537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.632 [2024-11-20 16:41:04.348544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.632 [2024-11-20 16:41:04.348550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.632 [2024-11-20 16:41:04.348563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.632 qpair failed and we were unable to recover it.
00:29:18.632 [2024-11-20 16:41:04.358463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.632 [2024-11-20 16:41:04.358511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.632 [2024-11-20 16:41:04.358527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.632 [2024-11-20 16:41:04.358534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.632 [2024-11-20 16:41:04.358541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.632 [2024-11-20 16:41:04.358554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.632 qpair failed and we were unable to recover it.
00:29:18.632 [2024-11-20 16:41:04.368500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.632 [2024-11-20 16:41:04.368577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.632 [2024-11-20 16:41:04.368591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.632 [2024-11-20 16:41:04.368598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.632 [2024-11-20 16:41:04.368604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.632 [2024-11-20 16:41:04.368617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.632 qpair failed and we were unable to recover it.
00:29:18.632 [2024-11-20 16:41:04.378494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.632 [2024-11-20 16:41:04.378541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.632 [2024-11-20 16:41:04.378554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.632 [2024-11-20 16:41:04.378561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.632 [2024-11-20 16:41:04.378568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.632 [2024-11-20 16:41:04.378581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.632 qpair failed and we were unable to recover it.
00:29:18.632 [2024-11-20 16:41:04.388627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.632 [2024-11-20 16:41:04.388685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.632 [2024-11-20 16:41:04.388699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.632 [2024-11-20 16:41:04.388706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.632 [2024-11-20 16:41:04.388712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.632 [2024-11-20 16:41:04.388725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.632 qpair failed and we were unable to recover it.
00:29:18.632 [2024-11-20 16:41:04.398566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.632 [2024-11-20 16:41:04.398631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.632 [2024-11-20 16:41:04.398645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.632 [2024-11-20 16:41:04.398652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.632 [2024-11-20 16:41:04.398662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.398676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.408581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.408631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.408645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.408652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.408658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.408672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.418607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.418655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.418668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.418675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.418682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.418695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.428667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.428748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.428761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.428768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.428774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.428788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.438616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.438672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.438685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.438693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.438699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.438712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.448654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.448706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.448720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.448727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.448733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.448746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.458698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.458748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.458761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.458768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.458774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.458787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.468783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.468836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.468849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.468856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.468862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.468876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.478760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.478808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.478821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.478828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.478834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.478848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.488787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.488851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.488868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.488875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.488882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.488895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.498822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.498870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.498884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.498891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.498897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.498910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.508877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.508928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.508942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.508949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.508955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.508968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.518856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.518911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.518925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.518932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.518938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.633 [2024-11-20 16:41:04.518951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.633 qpair failed and we were unable to recover it.
00:29:18.633 [2024-11-20 16:41:04.528885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.633 [2024-11-20 16:41:04.528931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.633 [2024-11-20 16:41:04.528945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.633 [2024-11-20 16:41:04.528952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.633 [2024-11-20 16:41:04.528961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.634 [2024-11-20 16:41:04.528974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.634 qpair failed and we were unable to recover it.
00:29:18.634 [2024-11-20 16:41:04.538854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.634 [2024-11-20 16:41:04.538908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.634 [2024-11-20 16:41:04.538921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.634 [2024-11-20 16:41:04.538929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.634 [2024-11-20 16:41:04.538937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.634 [2024-11-20 16:41:04.538951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.634 qpair failed and we were unable to recover it.
00:29:18.634 [2024-11-20 16:41:04.548976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:18.634 [2024-11-20 16:41:04.549031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:18.634 [2024-11-20 16:41:04.549046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:18.634 [2024-11-20 16:41:04.549053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:18.634 [2024-11-20 16:41:04.549059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010
00:29:18.634 [2024-11-20 16:41:04.549073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:18.634 qpair failed and we were unable to recover it.
00:29:18.634 [2024-11-20 16:41:04.558989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.634 [2024-11-20 16:41:04.559040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.634 [2024-11-20 16:41:04.559054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.634 [2024-11-20 16:41:04.559061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.634 [2024-11-20 16:41:04.559067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.634 [2024-11-20 16:41:04.559082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.634 qpair failed and we were unable to recover it. 
00:29:18.634 [2024-11-20 16:41:04.569022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.634 [2024-11-20 16:41:04.569067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.634 [2024-11-20 16:41:04.569080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.634 [2024-11-20 16:41:04.569087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.634 [2024-11-20 16:41:04.569093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.634 [2024-11-20 16:41:04.569107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.634 qpair failed and we were unable to recover it. 
00:29:18.634 [2024-11-20 16:41:04.579024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.634 [2024-11-20 16:41:04.579072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.634 [2024-11-20 16:41:04.579086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.634 [2024-11-20 16:41:04.579093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.634 [2024-11-20 16:41:04.579099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a1010 00:29:18.634 [2024-11-20 16:41:04.579112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.634 qpair failed and we were unable to recover it. 
00:29:18.895 [2024-11-20 16:41:04.589064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.895 [2024-11-20 16:41:04.589166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.895 [2024-11-20 16:41:04.589231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.895 [2024-11-20 16:41:04.589258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.895 [2024-11-20 16:41:04.589279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe980000b90 00:29:18.895 [2024-11-20 16:41:04.589334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.895 qpair failed and we were unable to recover it. 
00:29:18.895 [2024-11-20 16:41:04.599076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.895 [2024-11-20 16:41:04.599148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.895 [2024-11-20 16:41:04.599178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.895 [2024-11-20 16:41:04.599194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.895 [2024-11-20 16:41:04.599208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe980000b90 00:29:18.895 [2024-11-20 16:41:04.599240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:18.895 qpair failed and we were unable to recover it. 00:29:18.895 [2024-11-20 16:41:04.599457] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:18.895 A controller has encountered a failure and is being reset. 00:29:18.895 Controller properly reset. 00:29:18.895 Initializing NVMe Controllers 00:29:18.896 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:18.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:18.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:18.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:18.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:18.896 Initialization complete. Launching workers. 
00:29:18.896 Starting thread on core 1 00:29:18.896 Starting thread on core 2 00:29:18.896 Starting thread on core 3 00:29:18.896 Starting thread on core 0 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:18.896 00:29:18.896 real 0m11.435s 00:29:18.896 user 0m21.831s 00:29:18.896 sys 0m3.516s 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.896 ************************************ 00:29:18.896 END TEST nvmf_target_disconnect_tc2 00:29:18.896 ************************************ 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.896 rmmod nvme_tcp 00:29:18.896 rmmod nvme_fabrics 00:29:18.896 rmmod nvme_keyring 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2396178 ']' 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2396178 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2396178 ']' 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2396178 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396178 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396178' 00:29:18.896 killing process with pid 2396178 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2396178 00:29:18.896 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2396178 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.157 16:41:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.070 16:41:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.070 00:29:21.070 real 0m21.790s 00:29:21.070 user 0m49.598s 00:29:21.070 sys 0m9.692s 00:29:21.070 16:41:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.070 16:41:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:21.070 ************************************ 00:29:21.070 END TEST nvmf_target_disconnect 00:29:21.070 ************************************ 00:29:21.332 16:41:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:21.332 00:29:21.332 real 6m31.612s 00:29:21.332 user 11m19.230s 00:29:21.332 sys 2m11.507s 00:29:21.332 16:41:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.332 16:41:07 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:21.332 ************************************ 00:29:21.332 END TEST nvmf_host 00:29:21.332 ************************************ 00:29:21.332 16:41:07 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:21.332 16:41:07 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:21.332 16:41:07 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:21.332 16:41:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:21.332 16:41:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.332 16:41:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:21.332 ************************************ 00:29:21.332 START TEST nvmf_target_core_interrupt_mode 00:29:21.332 ************************************ 00:29:21.332 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:21.332 * Looking for test storage... 
00:29:21.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:21.332 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:21.332 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:29:21.332 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:21.594 16:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:21.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.594 --rc 
genhtml_branch_coverage=1 00:29:21.594 --rc genhtml_function_coverage=1 00:29:21.594 --rc genhtml_legend=1 00:29:21.594 --rc geninfo_all_blocks=1 00:29:21.594 --rc geninfo_unexecuted_blocks=1 00:29:21.594 00:29:21.594 ' 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:21.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.594 --rc genhtml_branch_coverage=1 00:29:21.594 --rc genhtml_function_coverage=1 00:29:21.594 --rc genhtml_legend=1 00:29:21.594 --rc geninfo_all_blocks=1 00:29:21.594 --rc geninfo_unexecuted_blocks=1 00:29:21.594 00:29:21.594 ' 00:29:21.594 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:21.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.595 --rc genhtml_branch_coverage=1 00:29:21.595 --rc genhtml_function_coverage=1 00:29:21.595 --rc genhtml_legend=1 00:29:21.595 --rc geninfo_all_blocks=1 00:29:21.595 --rc geninfo_unexecuted_blocks=1 00:29:21.595 00:29:21.595 ' 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:21.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.595 --rc genhtml_branch_coverage=1 00:29:21.595 --rc genhtml_function_coverage=1 00:29:21.595 --rc genhtml_legend=1 00:29:21.595 --rc geninfo_all_blocks=1 00:29:21.595 --rc geninfo_unexecuted_blocks=1 00:29:21.595 00:29:21.595 ' 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.595 
16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.595 16:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:21.595 
16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:21.595 ************************************ 00:29:21.595 START TEST nvmf_abort 00:29:21.595 ************************************ 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:21.595 * Looking for test storage... 
00:29:21.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:29:21.595 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:21.857 16:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.857 --rc genhtml_branch_coverage=1 00:29:21.857 --rc genhtml_function_coverage=1 00:29:21.857 --rc genhtml_legend=1 00:29:21.857 --rc geninfo_all_blocks=1 00:29:21.857 --rc geninfo_unexecuted_blocks=1 00:29:21.857 00:29:21.857 ' 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.857 --rc genhtml_branch_coverage=1 00:29:21.857 --rc genhtml_function_coverage=1 00:29:21.857 --rc genhtml_legend=1 00:29:21.857 --rc geninfo_all_blocks=1 00:29:21.857 --rc geninfo_unexecuted_blocks=1 00:29:21.857 00:29:21.857 ' 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.857 --rc genhtml_branch_coverage=1 00:29:21.857 --rc genhtml_function_coverage=1 00:29:21.857 --rc genhtml_legend=1 00:29:21.857 --rc geninfo_all_blocks=1 00:29:21.857 --rc geninfo_unexecuted_blocks=1 00:29:21.857 00:29:21.857 ' 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:21.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.857 --rc genhtml_branch_coverage=1 00:29:21.857 --rc genhtml_function_coverage=1 00:29:21.857 --rc genhtml_legend=1 00:29:21.857 --rc geninfo_all_blocks=1 00:29:21.857 --rc geninfo_unexecuted_blocks=1 00:29:21.857 00:29:21.857 ' 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
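The branch above sets `lcov_rc_opt` to the `lcov_`-prefixed `--rc` keys because the detected lcov is older than 2.x; lcov 2.x renamed those settings, dropping the prefix. A hedged sketch of that gating (helper name hypothetical, not the script's actual function):

```shell
# Sketch of the version gate seen above (helper name hypothetical):
# lcov < 2.x wants the "lcov_"-prefixed --rc keys; lcov 2.x dropped
# the prefix.
pick_lcov_rc_opts() {
    local major=${1%%.*}                 # leading numeric component
    if [ "$major" -lt 2 ]; then
        echo '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    else
        echo '--rc branch_coverage=1 --rc function_coverage=1'
    fi
}
```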
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.857 16:41:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.857 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.858 16:41:07 
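paths/export.sh is sourced once per sourced script, and each pass prepends the same go/protoc/golangci directories, which is why the PATH echoed above carries so many duplicates. That is harmless (lookup stops at the first hit), but a dedup pass like the following sketch would shorten it (helper name hypothetical):

```shell
# Keep only the first occurrence of each PATH component.
dedup_path() {
    local out='' dir
    local IFS=':'
    for dir in $1; do
        case ":$out:" in
            *":$dir:"*) ;;               # already kept: skip
            *) out=${out:+$out:}$dir ;;  # first occurrence: keep
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"
# -> /opt/go/bin:/usr/bin:/bin
```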
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.858 16:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.003 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:30.004 16:41:14 
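The `e810`, `x722`, and `mlx` arrays above are filled from a `pci_bus_cache` map keyed by `vendor:device`. A sketch of how such a cache can be built from sysfs (the real builder in nvmf/common.sh may differ; `scan_pci` is a hypothetical name, and the sysfs root is a parameter so the logic can be exercised against a fixture directory):

```shell
# Walk a sysfs-style tree and group PCI functions by vendor:device ID.
declare -A pci_bus_cache
scan_pci() {
    local root=$1 dev vendor device
    for dev in "$root"/*; do
        [ -e "$dev/vendor" ] || continue
        read -r vendor < "$dev/vendor"
        read -r device < "$dev/device"
        pci_bus_cache["$vendor:$device"]+="${dev##*/} "  # append the BDF
    done
}

scan_pci /sys/bus/pci/devices
# e810 NICs are Intel (0x8086) "ice" devices such as 0x1592 / 0x159b:
e810=(${pci_bus_cache["0x8086:0x1592"]-} ${pci_bus_cache["0x8086:0x159b"]-})
```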
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:30.004 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:30.004 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.004 
16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:30.004 Found net devices under 0000:31:00.0: cvl_0_0 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:30.004 Found net devices under 0000:31:00.1: cvl_0_1 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.004 16:41:14 
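Each matched PCI function is then mapped to its kernel interface name by globbing `/sys/bus/pci/devices/<bdf>/net/*` and stripping the directory part, as the `${pci_net_devs[@]##*/}` expansion above does; that is how `0000:31:00.0` resolves to `cvl_0_0` and `0000:31:00.1` to `cvl_0_1`. A tiny parameterized sketch (helper name hypothetical):

```shell
# List the interface names under <device-dir>/net/; on a real system
# pass /sys/bus/pci/devices/<bdf>.
netdevs_under() {
    local root=$1 d
    for d in "$root"/net/*; do
        [ -e "$d" ] && printf '%s\n' "${d##*/}"  # keep only the name
    done
    return 0
}
```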
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:29:30.004 00:29:30.004 --- 10.0.0.2 ping statistics --- 00:29:30.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.004 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
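Collected in one place, the namespace setup just traced works like this: one NIC port moves into a private namespace to act as the target side, both ends get 10.0.0.x/24 addresses, NVMe/TCP port 4420 is opened in the firewall (the `ipts` wrapper tags the rule with an `SPDK_NVMF` comment so it can be cleaned up later), and reachability is verified with ping in both directions. Interface and address names are the ones this run happened to use; the commands need root and the real hardware, so this is a transcript summary rather than a runnable example:

```shell
ip -4 addr flush cvl_0_0                   # start from clean addressing
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk               # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move the target-side port in
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host
```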
00:29:30.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:29:30.004 00:29:30.004 --- 10.0.0.1 ping statistics --- 00:29:30.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.004 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2401640 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2401640 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:30.004 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2401640 ']' 00:29:30.005 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.005 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.005 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.005 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.005 16:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 [2024-11-20 16:41:14.949236] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:30.005 [2024-11-20 16:41:14.950789] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
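`waitforlisten` blocks until the freshly launched `nvmf_tgt` (pid 2401640 here) is alive and its UNIX-domain RPC socket accepts work. A minimal sketch of the idea (assumption: the real helper in autotest_common.sh is more thorough; `/var/tmp/spdk.sock` is the default RPC socket path seen in the trace):

```shell
# Poll until the process is alive AND its RPC socket exists, or give up.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for (( i = 0; i < 100; i++ )); do           # ~10 s budget
        kill -0 "$pid" 2>/dev/null || return 1  # process died / never existed
        [ -S "$sock" ] && return 0              # RPC socket is up
        sleep 0.1
    done
    return 1
}
```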
00:29:30.005 [2024-11-20 16:41:14.950854] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.005 [2024-11-20 16:41:15.051987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:30.005 [2024-11-20 16:41:15.103246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.005 [2024-11-20 16:41:15.103300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.005 [2024-11-20 16:41:15.103310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.005 [2024-11-20 16:41:15.103317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.005 [2024-11-20 16:41:15.103323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.005 [2024-11-20 16:41:15.105291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.005 [2024-11-20 16:41:15.105453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.005 [2024-11-20 16:41:15.105455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.005 [2024-11-20 16:41:15.181017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:30.005 [2024-11-20 16:41:15.181084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:30.005 [2024-11-20 16:41:15.181714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:30.005 [2024-11-20 16:41:15.182025] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 [2024-11-20 16:41:15.810352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:30.005 Malloc0 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 Delay0 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 [2024-11-20 16:41:15.914267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.005 16:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:30.265 [2024-11-20 16:41:16.040713] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:32.179 Initializing NVMe Controllers 00:29:32.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:32.179 controller IO queue size 128 less than required 00:29:32.179 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:32.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:32.179 Initialization complete. Launching workers. 
00:29:32.179 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29169 00:29:32.179 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29226, failed to submit 66 00:29:32.179 success 29169, unsuccessful 57, failed 0 00:29:32.179 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:32.179 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.179 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:32.179 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.179 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:32.179 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:32.179 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.179 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.440 rmmod nvme_tcp 00:29:32.440 rmmod nvme_fabrics 00:29:32.440 rmmod nvme_keyring 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.440 16:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2401640 ']' 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2401640 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2401640 ']' 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2401640 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401640 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2401640' 00:29:32.440 killing process with pid 2401640 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2401640 00:29:32.440 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2401640 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.701 16:41:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.701 16:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.611 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.611 00:29:34.611 real 0m13.077s 00:29:34.611 user 0m10.755s 00:29:34.611 sys 0m6.725s 00:29:34.611 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:34.611 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:34.611 ************************************ 00:29:34.611 END TEST nvmf_abort 00:29:34.611 ************************************ 00:29:34.611 16:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:34.611 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:34.611 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:34.611 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:34.611 ************************************ 00:29:34.611 START TEST nvmf_ns_hotplug_stress 00:29:34.611 ************************************ 00:29:34.611 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:34.872 * Looking for test storage... 
00:29:34.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.872 16:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:34.872 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.873 16:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:34.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.873 --rc genhtml_branch_coverage=1 00:29:34.873 --rc genhtml_function_coverage=1 00:29:34.873 --rc genhtml_legend=1 00:29:34.873 --rc geninfo_all_blocks=1 00:29:34.873 --rc geninfo_unexecuted_blocks=1 00:29:34.873 00:29:34.873 ' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:34.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.873 --rc genhtml_branch_coverage=1 00:29:34.873 --rc genhtml_function_coverage=1 00:29:34.873 --rc genhtml_legend=1 00:29:34.873 --rc geninfo_all_blocks=1 00:29:34.873 --rc geninfo_unexecuted_blocks=1 00:29:34.873 00:29:34.873 ' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:34.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.873 --rc genhtml_branch_coverage=1 00:29:34.873 --rc genhtml_function_coverage=1 00:29:34.873 --rc genhtml_legend=1 00:29:34.873 --rc geninfo_all_blocks=1 00:29:34.873 --rc geninfo_unexecuted_blocks=1 00:29:34.873 00:29:34.873 ' 00:29:34.873 16:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:34.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.873 --rc genhtml_branch_coverage=1 00:29:34.873 --rc genhtml_function_coverage=1 00:29:34.873 --rc genhtml_legend=1 00:29:34.873 --rc geninfo_all_blocks=1 00:29:34.873 --rc geninfo_unexecuted_blocks=1 00:29:34.873 00:29:34.873 ' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.873 16:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.873 
16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.873 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.874 16:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.071 
16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.071 16:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:43.071 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.071 16:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:43.071 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.071 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.072 
16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:43.072 Found net devices under 0000:31:00.0: cvl_0_0 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:43.072 Found net devices under 0000:31:00.1: cvl_0_1 00:29:43.072 
16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.072 16:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms 00:29:43.072 00:29:43.072 --- 10.0.0.2 ping statistics --- 00:29:43.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.072 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:29:43.072 00:29:43.072 --- 10.0.0.1 ping statistics --- 00:29:43.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.072 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.072 16:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2406491 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2406491 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2406491 ']' 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.072 16:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:43.072 [2024-11-20 16:41:28.312960] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:43.072 [2024-11-20 16:41:28.314131] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:29:43.072 [2024-11-20 16:41:28.314180] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.072 [2024-11-20 16:41:28.413839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:43.072 [2024-11-20 16:41:28.466058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.072 [2024-11-20 16:41:28.466105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.072 [2024-11-20 16:41:28.466114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.072 [2024-11-20 16:41:28.466121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.072 [2024-11-20 16:41:28.466128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:43.073 [2024-11-20 16:41:28.467957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.073 [2024-11-20 16:41:28.468122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.073 [2024-11-20 16:41:28.468308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.073 [2024-11-20 16:41:28.544575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:43.073 [2024-11-20 16:41:28.544636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:43.073 [2024-11-20 16:41:28.545210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:43.073 [2024-11-20 16:41:28.545517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:43.336 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.336 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:43.336 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.336 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.336 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:43.336 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.336 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:43.336 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:43.599 [2024-11-20 16:41:29.333225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.599 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:43.599 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.860 [2024-11-20 16:41:29.690040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.860 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:44.122 16:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:44.382 Malloc0 00:29:44.382 16:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:44.382 Delay0 00:29:44.382 16:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.642 16:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:44.901 NULL1 00:29:44.901 16:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:44.901 16:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2407052 00:29:44.901 16:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:44.901 16:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:44.901 16:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.161 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.435 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:45.435 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:45.435 true 00:29:45.435 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:45.435 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.695 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.955 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:45.955 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:46.217 true 00:29:46.217 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:46.217 16:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.217 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.477 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:46.477 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:46.738 true 00:29:46.738 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:46.738 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.999 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.999 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:46.999 16:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:47.260 true 00:29:47.260 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:47.260 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.521 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.521 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:47.522 16:41:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:47.782 true 00:29:47.782 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:47.782 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.043 16:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.305 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:48.305 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:48.305 true 00:29:48.305 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:48.305 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.565 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.826 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
00:29:48.826 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:48.826 true 00:29:48.826 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:48.826 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.087 16:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.347 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:49.347 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:49.347 true 00:29:49.608 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:49.608 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.608 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.868 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:29:49.868 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:50.129 true 00:29:50.129 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:50.129 16:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.129 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.389 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:50.389 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:50.650 true 00:29:50.650 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:50.650 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.911 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.911 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:50.911 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:51.172 true 00:29:51.172 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:51.172 16:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.433 16:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.433 16:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:51.433 16:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:51.693 true 00:29:51.693 16:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:51.693 16:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.953 16:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.953 16:41:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:51.953 16:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:52.214 true 00:29:52.214 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:52.214 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.475 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.736 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:52.736 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:52.736 true 00:29:52.736 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:52.736 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.997 16:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:29:53.257 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:53.257 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:53.257 true 00:29:53.257 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:53.257 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.519 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.780 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:53.780 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:53.780 true 00:29:53.780 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:53.780 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.041 16:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.301 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:54.301 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:54.561 true 00:29:54.561 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:54.561 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.561 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.822 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:54.822 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:55.083 true 00:29:55.083 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:55.083 16:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.344 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.344 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:55.344 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:55.605 true 00:29:55.605 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:55.605 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.865 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.865 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:55.865 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:56.127 true 00:29:56.127 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:56.127 16:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.387 16:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.387 16:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:56.387 16:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:56.648 true 00:29:56.648 16:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:56.648 16:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.909 16:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.170 16:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:57.170 16:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:57.170 true 00:29:57.170 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:57.170 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.430 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.691 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:57.691 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:57.691 true 00:29:57.691 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:57.691 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.953 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.214 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:58.214 16:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:58.214 true 00:29:58.476 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:58.476 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.476 16:41:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.737 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:58.737 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:58.998 true 00:29:58.998 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:58.998 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.998 16:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.260 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:59.260 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:59.520 true 00:29:59.520 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:29:59.520 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:29:59.520 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.781 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:59.781 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:00.041 true 00:30:00.041 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:00.041 16:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.302 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.302 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:00.302 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:00.563 true 00:30:00.563 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:00.563 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:00.824 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.824 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:00.824 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:01.085 true 00:30:01.085 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:01.085 16:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.345 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.345 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:01.345 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:01.606 true 00:30:01.606 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:01.606 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.866 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.127 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:02.127 16:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:02.127 true 00:30:02.127 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:02.127 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.388 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.649 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:02.649 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:02.649 true 00:30:02.649 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:02.649 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.910 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.171 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:03.171 16:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:03.171 true 00:30:03.439 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:03.439 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.439 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.701 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:03.702 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:03.963 true 00:30:03.963 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:03.963 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.223 16:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.223 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:04.223 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:04.484 true 00:30:04.484 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:04.484 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.745 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.745 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:04.745 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:05.006 true 00:30:05.006 16:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:05.006 16:41:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.267 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.527 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:05.527 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:05.527 true 00:30:05.527 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:05.527 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.789 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.049 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:06.049 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:06.049 true 00:30:06.049 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 
00:30:06.049 16:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.329 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.590 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:06.590 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:06.590 true 00:30:06.590 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:06.590 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.852 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.112 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:07.113 16:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:07.113 true 00:30:07.113 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2407052 00:30:07.113 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.374 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.636 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:07.636 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:07.636 true 00:30:07.896 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:07.896 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.896 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.193 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:08.193 16:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:08.193 true 00:30:08.454 16:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:08.454 16:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.454 16:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.715 16:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:08.716 16:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:08.977 true 00:30:08.977 16:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:08.977 16:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.977 16:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.238 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:09.238 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:09.499 true 00:30:09.499 16:41:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:09.499 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.499 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.760 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:09.760 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:10.021 true 00:30:10.021 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:10.021 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.282 16:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.282 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:10.282 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:10.542 true 
00:30:10.542 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:10.542 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.803 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.803 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:10.803 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:11.063 true 00:30:11.063 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:11.063 16:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.324 16:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.585 16:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:11.585 16:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:30:11.585 true 00:30:11.585 16:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:11.585 16:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.846 16:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.107 16:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:12.107 16:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:12.107 true 00:30:12.107 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:12.107 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.369 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.630 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:12.630 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:30:12.630 true 00:30:12.630 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:12.630 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.892 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.153 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:13.153 16:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:13.412 true 00:30:13.412 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:13.412 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.412 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.672 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:13.672 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:13.933 true 00:30:13.933 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:13.933 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.933 16:41:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.193 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:30:14.193 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:30:14.453 true 00:30:14.453 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:14.453 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.714 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.714 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:30:14.714 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:30:14.975 true 00:30:14.975 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052 00:30:14.975 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.235 16:42:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.235 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:30:15.235 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:30:15.235 Initializing NVMe Controllers 00:30:15.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.235 Controller IO queue size 128, less than required. 00:30:15.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:15.235 Initialization complete. Launching workers. 
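The iterations traced above follow one fixed pattern per pass (script lines @44-@50): check the perf process is still alive, detach the namespace, re-attach the Delay0 bdev, bump the size counter, and resize the NULL1 bdev under I/O. A minimal self-contained sketch of that loop, where `rpc` is a stub standing in for `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py` (the real script talks to a running SPDK target):

```shell
#!/usr/bin/env bash
# Sketch of the per-iteration hot-plug pattern in the trace
# (ns_hotplug_stress.sh@44-@50). "rpc" is a stub, not the real rpc.py.
rpc() { echo "rpc $*"; }

null_size=1044            # counter seen climbing 1047, 1048, ... in the log
for _ in 1 2 3; do
    kill -0 "$$"                                                  # @44: stop once the watched process exits
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: detach NSID 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                                  # @49: grow the target size
    rpc bdev_null_resize NULL1 "$null_size"                       # @50: resize NULL1 while I/O runs
done
echo "final null_size=$null_size"
```

Against a live target the same three RPCs race with the fio/perf workload, which is what makes the namespace hot-plug a stress test rather than a functional one.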
00:30:15.235 ========================================================
00:30:15.235                                Latency(us)
00:30:15.235 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:15.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30032.22      14.66    4262.10    1485.55   11473.19
00:30:15.235 ========================================================
00:30:15.235 Total                                                                    :   30032.22      14.66    4262.10    1485.55   11473.19
00:30:15.235
00:30:15.496 true
00:30:15.496 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2407052
00:30:15.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2407052) - No such process
00:30:15.496 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2407052
00:30:15.496 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:15.757 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:15.757 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:15.757 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:15.757 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:15.757 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:15.757 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:16.019 null0 00:30:16.019 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.019 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.019 16:42:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:16.281 null1 00:30:16.281 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.281 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.281 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:16.281 null2 00:30:16.281 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.281 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.281 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:16.542 null3 00:30:16.542 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.542 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:30:16.542 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:16.804 null4 00:30:16.804 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.804 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.804 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:16.804 null5 00:30:16.804 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:16.804 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:16.804 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:17.065 null6 00:30:17.065 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:17.065 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:17.066 16:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:17.066 null7 00:30:17.066 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:17.066 16:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:17.066 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:17.328 16:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2413350 2413351 2413353 2413356 2413357 2413359 2413361 2413363 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.328 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.329 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.329 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.590 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.851 16:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:17.851 16:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:17.851 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.113 16:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.113 16:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:18.113 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:18.113 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:18.113 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.113 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.113 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.113 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:18.113 16:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.375 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.375 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.375 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.376 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.636 16:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.636 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:18.897 16:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:18.897 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.158 16:42:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.158 16:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:19.158 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.158 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.158 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.158 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.158 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.158 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.158 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.420 16:42:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.420 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.421 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.421 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.421 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:19.421 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.421 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.421 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.421 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.421 16:42:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.421 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.682 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:19.942 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.942 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.942 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:19.943 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:20.203 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.204 16:42:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:20.204 16:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.204 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:20.464 16:42:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:20.464 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.464 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.464 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.465 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:20.725 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:20.987 16:42:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.987 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.987 rmmod nvme_tcp 00:30:20.987 rmmod nvme_fabrics 00:30:20.987 rmmod nvme_keyring 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2406491 ']' 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2406491 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2406491 ']' 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2406491 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.248 16:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2406491 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2406491' 00:30:21.248 killing process with pid 2406491 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2406491 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2406491 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.248 16:42:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.248 16:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.796 00:30:23.796 real 0m48.683s 00:30:23.796 user 3m3.325s 00:30:23.796 sys 0m21.580s 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:23.796 ************************************ 00:30:23.796 END TEST nvmf_ns_hotplug_stress 00:30:23.796 ************************************ 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:23.796 ************************************ 00:30:23.796 START TEST nvmf_delete_subsystem 00:30:23.796 ************************************ 00:30:23.796 16:42:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:23.796 * Looking for test storage... 00:30:23.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.796 16:42:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.796 16:42:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:23.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.796 --rc genhtml_branch_coverage=1 00:30:23.796 --rc genhtml_function_coverage=1 00:30:23.796 --rc genhtml_legend=1 00:30:23.796 --rc geninfo_all_blocks=1 00:30:23.796 --rc geninfo_unexecuted_blocks=1 00:30:23.796 00:30:23.796 ' 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:23.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.796 --rc genhtml_branch_coverage=1 00:30:23.796 --rc genhtml_function_coverage=1 00:30:23.796 --rc genhtml_legend=1 00:30:23.796 --rc geninfo_all_blocks=1 00:30:23.796 --rc geninfo_unexecuted_blocks=1 00:30:23.796 00:30:23.796 ' 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:23.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.796 --rc genhtml_branch_coverage=1 00:30:23.796 --rc 
genhtml_function_coverage=1 00:30:23.796 --rc genhtml_legend=1 00:30:23.796 --rc geninfo_all_blocks=1 00:30:23.796 --rc geninfo_unexecuted_blocks=1 00:30:23.796 00:30:23.796 ' 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:23.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.796 --rc genhtml_branch_coverage=1 00:30:23.796 --rc genhtml_function_coverage=1 00:30:23.796 --rc genhtml_legend=1 00:30:23.796 --rc geninfo_all_blocks=1 00:30:23.796 --rc geninfo_unexecuted_blocks=1 00:30:23.796 00:30:23.796 ' 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.796 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.797 16:42:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.797 16:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.051 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:32.052 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:32.052 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:32.052 Found net devices under 0000:31:00.0: cvl_0_0 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.052 16:42:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:32.052 Found net devices under 0000:31:00.1: cvl_0_1 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.052 16:42:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:32.052 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:32.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:30:32.052 00:30:32.052 --- 10.0.0.2 ping statistics --- 00:30:32.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.052 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:30:32.053 00:30:32.053 --- 10.0.0.1 ping statistics --- 00:30:32.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.053 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2418936 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2418936 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2418936 ']' 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.053 16:42:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.053 [2024-11-20 16:42:16.947997] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:32.053 [2024-11-20 16:42:16.949176] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:30:32.053 [2024-11-20 16:42:16.949228] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.053 [2024-11-20 16:42:17.051377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:32.053 [2024-11-20 16:42:17.091963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.053 [2024-11-20 16:42:17.092005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.053 [2024-11-20 16:42:17.092013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.053 [2024-11-20 16:42:17.092020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.053 [2024-11-20 16:42:17.092026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.053 [2024-11-20 16:42:17.093268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.053 [2024-11-20 16:42:17.093272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.053 [2024-11-20 16:42:17.150103] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:32.053 [2024-11-20 16:42:17.150645] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:32.053 [2024-11-20 16:42:17.150969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.053 [2024-11-20 16:42:17.789838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.053 [2024-11-20 16:42:17.818698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.053 NULL1 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:30:32.053 Delay0 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2419035 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:32.053 16:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:32.053 [2024-11-20 16:42:17.914617] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:33.968 16:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:33.968 16:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.968 16:42:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:34.229 Read completed with error (sct=0, sc=8)
00:30:34.229 starting I/O failed: -6
00:30:34.229 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted]
00:30:34.229 [2024-11-20 16:42:19.960395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236ef00 is same with the state(6) to be set
00:30:34.229 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted]
00:30:34.229 [2024-11-20 16:42:19.963036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb35c00d4b0 is same with the state(6) to be set
00:30:34.229 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:30:35.169 [2024-11-20 16:42:20.930902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23705e0 is same with the state(6) to be set
00:30:35.169 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:30:35.169 [2024-11-20 16:42:20.963792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236f0e0 is same with the state(6) to be set
00:30:35.169 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:30:35.169 [2024-11-20 16:42:20.964318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236f4a0 is same with the state(6) to be set
00:30:35.169 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:30:35.169 [2024-11-20 16:42:20.965544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb35c00d020 is same with the state(6) to be set
00:30:35.169 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:30:35.169 [2024-11-20 16:42:20.965628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb35c00d7e0 is same with the state(6) to be set
00:30:35.169 Initializing NVMe Controllers
00:30:35.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:35.169 Controller IO queue size 128, less than required.
00:30:35.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:35.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:35.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:35.169 Initialization complete. Launching workers.
00:30:35.169 ========================================================
00:30:35.169 Latency(us)
00:30:35.169 Device Information : IOPS MiB/s Average min max
00:30:35.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.20 0.08 890873.76 254.81 1007577.92
00:30:35.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.31 0.07 958153.68 240.81 2001056.94
00:30:35.169 ========================================================
00:30:35.169 Total : 321.51 0.16 922118.31 240.81 2001056.94
00:30:35.169
00:30:35.169 [2024-11-20 16:42:20.966205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23705e0 (9): Bad file descriptor
00:30:35.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:35.169 16:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.169 16:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:30:35.169 16:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2419035
00:30:35.170 16:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2419035
00:30:35.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2419035) - No such process
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2419035
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2419035
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2419035
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.741 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:35.742 [2024-11-20 16:42:21.498106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2419709
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419709
00:30:35.742 16:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:35.742 [2024-11-20 16:42:21.572413] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:30:36.313 16:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:36.313 16:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419709
00:30:36.313 16:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:36.573 16:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:36.573 16:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419709
00:30:36.573 16:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:37.143 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:37.143 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419709
00:30:37.143 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:37.713 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:37.714 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419709
00:30:37.714 16:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:38.284 16:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:38.284 16:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419709
00:30:38.284 16:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:38.855 16:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:38.855 16:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419709
00:30:38.855 16:42:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:30:38.855 Initializing NVMe Controllers
00:30:38.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:38.855 Controller IO queue size 128, less than required.
00:30:38.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:38.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:38.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:38.855 Initialization complete. Launching workers.
00:30:38.855 ========================================================
00:30:38.855 Latency(us)
00:30:38.855 Device Information : IOPS MiB/s Average min max
00:30:38.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002241.10 1000136.15 1006084.75
00:30:38.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003741.82 1000270.74 1009611.25
00:30:38.855 ========================================================
00:30:38.855 Total : 256.00 0.12 1002991.46 1000136.15 1009611.25
00:30:38.855
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419709
00:30:39.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2419709) - No such process
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2419709
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:39.115 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:39.115 rmmod nvme_tcp
00:30:39.376 rmmod nvme_fabrics
00:30:39.376 rmmod nvme_keyring
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2418936 ']'
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2418936
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2418936 ']'
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2418936
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2418936
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2418936'
00:30:39.376 killing process with pid 2418936
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2418936
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2418936
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:39.376 16:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:41.921 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:41.921
00:30:41.921 real 0m18.058s
00:30:41.921 user 0m26.201s
00:30:41.921 sys 0m7.254s
00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:41.922 ************************************
00:30:41.922 END TEST nvmf_delete_subsystem
00:30:41.922 ************************************
00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:41.922 ************************************
00:30:41.922 START TEST nvmf_host_management
00:30:41.922 ************************************
00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:30:41.922 * Looking for test storage...
00:30:41.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:41.922 16:42:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:41.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.922 --rc genhtml_branch_coverage=1 00:30:41.922 --rc genhtml_function_coverage=1 00:30:41.922 --rc genhtml_legend=1 00:30:41.922 --rc geninfo_all_blocks=1 00:30:41.922 --rc geninfo_unexecuted_blocks=1 00:30:41.922 00:30:41.922 ' 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:41.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.922 --rc genhtml_branch_coverage=1 00:30:41.922 --rc genhtml_function_coverage=1 00:30:41.922 --rc genhtml_legend=1 00:30:41.922 --rc geninfo_all_blocks=1 00:30:41.922 --rc geninfo_unexecuted_blocks=1 00:30:41.922 00:30:41.922 ' 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:41.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.922 --rc genhtml_branch_coverage=1 00:30:41.922 --rc genhtml_function_coverage=1 00:30:41.922 --rc genhtml_legend=1 00:30:41.922 --rc geninfo_all_blocks=1 00:30:41.922 --rc geninfo_unexecuted_blocks=1 00:30:41.922 00:30:41.922 ' 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:41.922 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:41.922 --rc genhtml_branch_coverage=1 00:30:41.922 --rc genhtml_function_coverage=1 00:30:41.922 --rc genhtml_legend=1 00:30:41.922 --rc geninfo_all_blocks=1 00:30:41.922 --rc geninfo_unexecuted_blocks=1 00:30:41.922 00:30:41.922 ' 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.922 16:42:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.922 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.923 
16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:41.923 16:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.066 
16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.066 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.067 16:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:50.067 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.067 16:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:50.067 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.067 16:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:50.067 Found net devices under 0000:31:00.0: cvl_0_0 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:50.067 Found net devices under 0000:31:00.1: cvl_0_1 00:30:50.067 16:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:30:50.067 00:30:50.067 --- 10.0.0.2 ping statistics --- 00:30:50.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.067 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:30:50.067 00:30:50.067 --- 10.0.0.1 ping statistics --- 00:30:50.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.067 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:50.067 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2424636 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2424636 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2424636 ']' 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.068 16:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.068 [2024-11-20 16:42:35.007902] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.068 [2024-11-20 16:42:35.009064] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:30:50.068 [2024-11-20 16:42:35.009114] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.068 [2024-11-20 16:42:35.110200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.068 [2024-11-20 16:42:35.163076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.068 [2024-11-20 16:42:35.163128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.068 [2024-11-20 16:42:35.163137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.068 [2024-11-20 16:42:35.163144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.068 [2024-11-20 16:42:35.163150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:50.068 [2024-11-20 16:42:35.165226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.068 [2024-11-20 16:42:35.165394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.068 [2024-11-20 16:42:35.165447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.068 [2024-11-20 16:42:35.165449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:50.068 [2024-11-20 16:42:35.243253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:50.068 [2024-11-20 16:42:35.243954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:50.068 [2024-11-20 16:42:35.244737] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:50.068 [2024-11-20 16:42:35.244920] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:50.068 [2024-11-20 16:42:35.245157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.068 [2024-11-20 16:42:35.854511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.068 16:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.068 Malloc0 00:30:50.068 [2024-11-20 16:42:35.950787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.068 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2424788 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2424788 /var/tmp/bdevperf.sock 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2424788 ']' 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:50.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:50.068 { 00:30:50.068 "params": { 00:30:50.068 "name": "Nvme$subsystem", 00:30:50.068 "trtype": "$TEST_TRANSPORT", 00:30:50.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.068 "adrfam": "ipv4", 00:30:50.068 "trsvcid": "$NVMF_PORT", 00:30:50.068 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.068 "hdgst": ${hdgst:-false}, 00:30:50.068 "ddgst": ${ddgst:-false} 00:30:50.068 }, 00:30:50.068 "method": "bdev_nvme_attach_controller" 00:30:50.068 } 00:30:50.068 EOF 00:30:50.068 )") 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:50.068 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:50.329 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:50.329 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:50.329 "params": { 00:30:50.329 "name": "Nvme0", 00:30:50.329 "trtype": "tcp", 00:30:50.329 "traddr": "10.0.0.2", 00:30:50.329 "adrfam": "ipv4", 00:30:50.329 "trsvcid": "4420", 00:30:50.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:50.329 "hdgst": false, 00:30:50.329 "ddgst": false 00:30:50.329 }, 00:30:50.329 "method": "bdev_nvme_attach_controller" 00:30:50.329 }' 00:30:50.329 [2024-11-20 16:42:36.065137] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:30:50.329 [2024-11-20 16:42:36.065195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424788 ] 00:30:50.329 [2024-11-20 16:42:36.137103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.329 [2024-11-20 16:42:36.173464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.589 Running I/O for 10 seconds... 
00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:51.163 16:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.163 
[2024-11-20 16:42:36.922112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922260] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 [2024-11-20 16:42:36.922415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce0a80 is same with the state(6) to be set 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:51.163 [2024-11-20 16:42:36.927608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.163 [2024-11-20 16:42:36.927644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.927655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.163 [2024-11-20 16:42:36.927663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.927671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.163 [2024-11-20 16:42:36.927680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.927688] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.163 [2024-11-20 16:42:36.927695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.927703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189d280 is same with the state(6) to be set 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.163 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:51.163 [2024-11-20 16:42:36.942120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189d280 (9): Bad file descriptor 00:30:51.163 [2024-11-20 16:42:36.942207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:51.163 [2024-11-20 16:42:36.942365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.163 [2024-11-20 16:42:36.942389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.163 [2024-11-20 16:42:36.942400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.164 [2024-11-20 16:42:36.942558] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.942565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: WRITE commands cid:20 through cid:62 (lba 109056 through 114432, len:128) each reported ABORTED - SQ DELETION (00/08) ...]
00:30:51.164 [2024-11-20 16:42:36.943326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.164 [2024-11-20 16:42:36.943333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.165 [2024-11-20 16:42:36.944551]
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:30:51.165 task offset: 106496 on job bdev=Nvme0n1 fails
00:30:51.165
00:30:51.165 Latency(us)
00:30:51.165 [2024-11-20T15:42:37.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:51.165 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:51.165 Job: Nvme0n1 ended in about 0.55 seconds with error
00:30:51.165 Verification LBA range: start 0x0 length 0x400
00:30:51.165 Nvme0n1 : 0.55 1525.68 95.36 117.36 0.00 37967.98 1563.31 37573.97
00:30:51.165 [2024-11-20T15:42:37.124Z] ===================================================================================================================
00:30:51.165 [2024-11-20T15:42:37.124Z] Total : 1525.68 95.36 117.36 0.00 37967.98 1563.31 37573.97
00:30:51.165 [2024-11-20 16:42:36.946536] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:30:51.165 [2024-11-20 16:42:36.952112] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2424788 00:30:52.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2424788) - No such process 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:52.106 { 00:30:52.106 "params": { 00:30:52.106 "name": "Nvme$subsystem", 00:30:52.106 "trtype": "$TEST_TRANSPORT", 00:30:52.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.106 "adrfam": "ipv4", 00:30:52.106 "trsvcid": "$NVMF_PORT", 00:30:52.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.106 "hdgst": ${hdgst:-false}, 00:30:52.106 "ddgst": ${ddgst:-false} 
00:30:52.106 }, 00:30:52.106 "method": "bdev_nvme_attach_controller" 00:30:52.106 } 00:30:52.106 EOF 00:30:52.106 )") 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:52.106 16:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:52.106 "params": { 00:30:52.106 "name": "Nvme0", 00:30:52.106 "trtype": "tcp", 00:30:52.106 "traddr": "10.0.0.2", 00:30:52.106 "adrfam": "ipv4", 00:30:52.106 "trsvcid": "4420", 00:30:52.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:52.106 "hdgst": false, 00:30:52.106 "ddgst": false 00:30:52.106 }, 00:30:52.106 "method": "bdev_nvme_attach_controller" 00:30:52.106 }' 00:30:52.106 [2024-11-20 16:42:38.000171] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:30:52.106 [2024-11-20 16:42:38.000224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425132 ] 00:30:52.367 [2024-11-20 16:42:38.072315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.367 [2024-11-20 16:42:38.107415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.367 Running I/O for 1 seconds... 
00:30:53.568 1719.00 IOPS, 107.44 MiB/s
00:30:53.568 Latency(us)
00:30:53.568 [2024-11-20T15:42:39.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:53.568 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:53.568 Verification LBA range: start 0x0 length 0x400
00:30:53.569 Nvme0n1 : 1.02 1753.51 109.59 0.00 0.00 35531.86 4041.39 35607.89
00:30:53.569 [2024-11-20T15:42:39.528Z] ===================================================================================================================
00:30:53.569 [2024-11-20T15:42:39.528Z] Total : 1753.51 109.59 0.00 0.00 35531.86 4041.39 35607.89
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:30:53.569
16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:53.569 rmmod nvme_tcp 00:30:53.569 rmmod nvme_fabrics 00:30:53.569 rmmod nvme_keyring 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2424636 ']' 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2424636 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2424636 ']' 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2424636 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:53.569 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2424636 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:53.829 16:42:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2424636' 00:30:53.829 killing process with pid 2424636 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2424636 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2424636 00:30:53.829 [2024-11-20 16:42:39.635719] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.829 16:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.377 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.377 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:56.377 00:30:56.377 real 0m14.272s 00:30:56.377 user 0m18.591s 00:30:56.377 sys 0m7.257s 00:30:56.377 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:56.377 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:56.377 ************************************ 00:30:56.377 END TEST nvmf_host_management 00:30:56.377 ************************************ 00:30:56.377 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:56.377 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:56.377 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:56.377 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:56.378 ************************************ 00:30:56.378 START TEST nvmf_lvol 00:30:56.378 ************************************ 00:30:56.378 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:56.378 * Looking for test storage... 
00:30:56.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:56.378 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:56.378 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:30:56.378 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:56.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.378 --rc genhtml_branch_coverage=1 00:30:56.378 --rc genhtml_function_coverage=1 00:30:56.378 --rc genhtml_legend=1 00:30:56.378 --rc geninfo_all_blocks=1 00:30:56.378 --rc geninfo_unexecuted_blocks=1 00:30:56.378 00:30:56.378 ' 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:56.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.378 --rc genhtml_branch_coverage=1 00:30:56.378 --rc genhtml_function_coverage=1 00:30:56.378 --rc genhtml_legend=1 00:30:56.378 --rc geninfo_all_blocks=1 00:30:56.378 --rc geninfo_unexecuted_blocks=1 00:30:56.378 00:30:56.378 ' 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:56.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.378 --rc genhtml_branch_coverage=1 00:30:56.378 --rc genhtml_function_coverage=1 00:30:56.378 --rc genhtml_legend=1 00:30:56.378 --rc geninfo_all_blocks=1 00:30:56.378 --rc geninfo_unexecuted_blocks=1 00:30:56.378 00:30:56.378 ' 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:56.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.378 --rc genhtml_branch_coverage=1 00:30:56.378 --rc genhtml_function_coverage=1 00:30:56.378 --rc genhtml_legend=1 00:30:56.378 --rc geninfo_all_blocks=1 00:30:56.378 --rc geninfo_unexecuted_blocks=1 00:30:56.378 00:30:56.378 ' 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:56.378 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:56.379 
16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.379 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.523 16:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.523 16:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:04.523 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:04.523 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.523 16:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.523 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:04.524 Found net devices under 0000:31:00.0: cvl_0_0 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.524 16:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:04.524 Found net devices under 0000:31:00.1: cvl_0_1 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:31:04.524 00:31:04.524 --- 10.0.0.2 ping statistics --- 00:31:04.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.524 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:31:04.524 00:31:04.524 --- 10.0.0.1 ping statistics --- 00:31:04.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.524 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2429804 
00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2429804 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2429804 ']' 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.524 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.524 [2024-11-20 16:42:49.660978] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:04.524 [2024-11-20 16:42:49.662604] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:31:04.524 [2024-11-20 16:42:49.662674] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.524 [2024-11-20 16:42:49.747847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:04.524 [2024-11-20 16:42:49.788998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.524 [2024-11-20 16:42:49.789036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.524 [2024-11-20 16:42:49.789044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.524 [2024-11-20 16:42:49.789051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.524 [2024-11-20 16:42:49.789056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:04.524 [2024-11-20 16:42:49.790634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.524 [2024-11-20 16:42:49.790749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.524 [2024-11-20 16:42:49.790752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.524 [2024-11-20 16:42:49.847662] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:04.524 [2024-11-20 16:42:49.848126] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:04.524 [2024-11-20 16:42:49.848489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:04.524 [2024-11-20 16:42:49.848752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:04.524 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:04.524 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:31:04.524 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:04.524 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:04.524 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:04.785 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:04.785 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:04.785 [2024-11-20 16:42:50.671323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.785 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:05.046 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:05.046 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:05.307 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:05.307 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:05.566 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:05.566 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f503b15e-f738-4833-9892-4859400e7884 00:31:05.566 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f503b15e-f738-4833-9892-4859400e7884 lvol 20 00:31:05.826 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=52958e36-9107-4fb7-a2d8-cf30a7a6a338 00:31:05.826 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:06.085 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52958e36-9107-4fb7-a2d8-cf30a7a6a338 00:31:06.085 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.345 [2024-11-20 16:42:52.151421] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.345 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:06.603 
16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2430202 00:31:06.603 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:06.603 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:07.542 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 52958e36-9107-4fb7-a2d8-cf30a7a6a338 MY_SNAPSHOT 00:31:07.802 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b5a9d969-1270-495d-b35a-3bd7773f9452 00:31:07.802 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 52958e36-9107-4fb7-a2d8-cf30a7a6a338 30 00:31:08.061 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b5a9d969-1270-495d-b35a-3bd7773f9452 MY_CLONE 00:31:08.061 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=af72b344-1db3-46a8-bda6-9a2287722526 00:31:08.061 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate af72b344-1db3-46a8-bda6-9a2287722526 00:31:08.631 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2430202 00:31:18.619 Initializing NVMe Controllers 00:31:18.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:18.619 
Controller IO queue size 128, less than required. 00:31:18.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:18.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:18.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:18.619 Initialization complete. Launching workers. 00:31:18.619 ======================================================== 00:31:18.619 Latency(us) 00:31:18.619 Device Information : IOPS MiB/s Average min max 00:31:18.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12082.40 47.20 10596.07 2664.69 67757.51 00:31:18.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16191.00 63.25 7905.90 2279.07 66055.45 00:31:18.619 ======================================================== 00:31:18.619 Total : 28273.40 110.44 9055.52 2279.07 67757.51 00:31:18.619 00:31:18.619 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:18.619 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52958e36-9107-4fb7-a2d8-cf30a7a6a338 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f503b15e-f738-4833-9892-4859400e7884 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.619 rmmod nvme_tcp 00:31:18.619 rmmod nvme_fabrics 00:31:18.619 rmmod nvme_keyring 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.619 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2429804 ']' 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2429804 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2429804 ']' 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2429804 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2429804 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2429804' 00:31:18.620 killing process with pid 2429804 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2429804 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2429804 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.620 16:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.620 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.002 00:31:20.002 real 0m23.792s 00:31:20.002 user 0m55.764s 00:31:20.002 sys 0m10.549s 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:20.002 ************************************ 00:31:20.002 END TEST nvmf_lvol 00:31:20.002 ************************************ 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:20.002 ************************************ 00:31:20.002 START TEST nvmf_lvs_grow 00:31:20.002 ************************************ 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:20.002 * Looking for test storage... 
00:31:20.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:20.002 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.003 16:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.003 16:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.003 --rc genhtml_branch_coverage=1 00:31:20.003 --rc genhtml_function_coverage=1 00:31:20.003 --rc genhtml_legend=1 00:31:20.003 --rc geninfo_all_blocks=1 00:31:20.003 --rc geninfo_unexecuted_blocks=1 00:31:20.003 00:31:20.003 ' 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.003 --rc genhtml_branch_coverage=1 00:31:20.003 --rc genhtml_function_coverage=1 00:31:20.003 --rc genhtml_legend=1 00:31:20.003 --rc geninfo_all_blocks=1 00:31:20.003 --rc geninfo_unexecuted_blocks=1 00:31:20.003 00:31:20.003 ' 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.003 --rc genhtml_branch_coverage=1 00:31:20.003 --rc genhtml_function_coverage=1 00:31:20.003 --rc genhtml_legend=1 00:31:20.003 --rc geninfo_all_blocks=1 00:31:20.003 --rc geninfo_unexecuted_blocks=1 00:31:20.003 00:31:20.003 ' 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.003 --rc genhtml_branch_coverage=1 00:31:20.003 --rc genhtml_function_coverage=1 00:31:20.003 --rc genhtml_legend=1 00:31:20.003 --rc geninfo_all_blocks=1 00:31:20.003 --rc 
geninfo_unexecuted_blocks=1 00:31:20.003 00:31:20.003 ' 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:20.003 16:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.003 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.004 16:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.004 16:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.004 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.141 
16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.141 16:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.141 16:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:28.141 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:28.141 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:28.141 Found net devices under 0000:31:00.0: cvl_0_0 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.141 16:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.141 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:28.142 Found net devices under 0000:31:00.1: cvl_0_1 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.142 
16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:31:28.142 00:31:28.142 --- 10.0.0.2 ping statistics --- 00:31:28.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.142 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:31:28.142 00:31:28.142 --- 10.0.0.1 ping statistics --- 00:31:28.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.142 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.142 16:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2436564 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2436564 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2436564 ']' 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:28.142 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.142 [2024-11-20 16:43:13.496326] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:28.142 [2024-11-20 16:43:13.497884] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:31:28.142 [2024-11-20 16:43:13.497954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.142 [2024-11-20 16:43:13.581040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.142 [2024-11-20 16:43:13.621695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.142 [2024-11-20 16:43:13.621732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.142 [2024-11-20 16:43:13.621740] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.142 [2024-11-20 16:43:13.621747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.142 [2024-11-20 16:43:13.621753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:28.142 [2024-11-20 16:43:13.622345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.142 [2024-11-20 16:43:13.679181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.142 [2024-11-20 16:43:13.679431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
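The traced nvmf/common.sh fragment above (lines 410-429 of that script) enumerates network interfaces by globbing `/sys/bus/pci/devices/$pci/net/*` for each candidate PCI function and then stripping the directory prefix with `${pci_net_devs[@]##*/}`, yielding names like `cvl_0_0` and `cvl_0_1`. A minimal Python sketch of that discovery step, run against a throwaway fake sysfs tree (the PCI addresses and device names are taken from the log; the helper name is illustrative, not part of SPDK):

```python
import glob
import os
import tempfile

def find_net_devs(sysfs_root, pci_addrs):
    """Mimic nvmf/common.sh: glob <pci>/net/* and keep only the leaf names."""
    net_devs = []
    for pci in pci_addrs:
        pattern = os.path.join(sysfs_root, pci, "net", "*")
        # The shell expansion ${pci_net_devs[@]##*/} drops everything up to
        # the last slash; os.path.basename does the same for a single path.
        net_devs.extend(os.path.basename(p) for p in sorted(glob.glob(pattern)))
    return net_devs

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        for pci, dev in [("0000:31:00.0", "cvl_0_0"), ("0000:31:00.1", "cvl_0_1")]:
            os.makedirs(os.path.join(root, pci, "net", dev))
        print(find_net_devs(root, ["0000:31:00.0", "0000:31:00.1"]))
        # → ['cvl_0_0', 'cvl_0_1']
```

With two devices found, the script then moves one (`cvl_0_0`) into the `cvl_0_0_ns_spdk` network namespace so target and initiator can talk over a real TCP path on one host, as the subsequent `ip netns` and `ping` lines show.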
00:31:28.403 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.403 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:28.403 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.403 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.403 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.403 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.403 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:28.664 [2024-11-20 16:43:14.494833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:28.664 ************************************ 00:31:28.664 START TEST lvs_grow_clean 00:31:28.664 ************************************ 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:31:28.664 16:43:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:28.664 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:28.925 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:28.925 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:29.185 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:29.185 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:29.185 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:29.185 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:29.185 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:29.185 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 lvol 150 00:31:29.445 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2537ad97-d159-4679-8b28-b0e6c4f77616 00:31:29.445 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:29.445 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:29.705 [2024-11-20 16:43:15.430783] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:29.705 [2024-11-20 16:43:15.430919] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:29.705 true 00:31:29.705 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:29.705 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:29.705 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:29.705 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:29.966 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2537ad97-d159-4679-8b28-b0e6c4f77616 00:31:30.227 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.227 [2024-11-20 16:43:16.115092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.227 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2437194 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2437194 /var/tmp/bdevperf.sock 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2437194 ']' 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:30.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
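The size arithmetic behind the checks in this test is visible in the trace: `bdev_aio_create` with a 4096-byte block size turns the 200 MiB file into 51200 blocks (102400 after `truncate -s 400M` and `bdev_aio_rescan`), and the lvstore created with `--cluster-sz 4194304` reports 49 data clusters, i.e. one cluster short of 200 MiB / 4 MiB. A sketch of those expectations; note the single-cluster metadata overhead here is inferred from the logged 49-for-200-MiB figure, not derived from the lvstore on-disk format, so treat `md_clusters=1` as an assumption specific to these settings:

```python
MiB = 1024 * 1024

def aio_blocks(size_mb, block_size=4096):
    # bdev_aio_create was invoked with a 4096-byte logical block size
    return size_mb * MiB // block_size

def lvs_data_clusters(size_mb, cluster_sz=4 * MiB, md_clusters=1):
    # md_clusters=1 matches the logged numbers for --cluster-sz 4194304;
    # actual metadata overhead depends on the lvstore's md-page settings
    return size_mb * MiB // cluster_sz - md_clusters

print(aio_blocks(200), aio_blocks(400))                # → 51200 102400
print(lvs_data_clusters(200), lvs_data_clusters(400))  # → 49 99
```

This is why the test first asserts `data_clusters == 49` and, after growing the backing file and calling `bdev_lvol_grow_lvstore`, expects `total_data_clusters` to come back as 99.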
00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.489 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:30.489 [2024-11-20 16:43:16.354675] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:31:30.489 [2024-11-20 16:43:16.354727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437194 ] 00:31:30.489 [2024-11-20 16:43:16.443061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.810 [2024-11-20 16:43:16.480045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.485 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.485 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:31.485 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:31.746 Nvme0n1 00:31:31.746 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:31.746 [ 00:31:31.746 { 00:31:31.746 "name": "Nvme0n1", 00:31:31.746 "aliases": [ 00:31:31.746 "2537ad97-d159-4679-8b28-b0e6c4f77616" 00:31:31.746 ], 00:31:31.746 "product_name": "NVMe disk", 00:31:31.746 
"block_size": 4096, 00:31:31.746 "num_blocks": 38912, 00:31:31.746 "uuid": "2537ad97-d159-4679-8b28-b0e6c4f77616", 00:31:31.746 "numa_id": 0, 00:31:31.746 "assigned_rate_limits": { 00:31:31.746 "rw_ios_per_sec": 0, 00:31:31.746 "rw_mbytes_per_sec": 0, 00:31:31.746 "r_mbytes_per_sec": 0, 00:31:31.746 "w_mbytes_per_sec": 0 00:31:31.746 }, 00:31:31.746 "claimed": false, 00:31:31.746 "zoned": false, 00:31:31.746 "supported_io_types": { 00:31:31.746 "read": true, 00:31:31.746 "write": true, 00:31:31.746 "unmap": true, 00:31:31.746 "flush": true, 00:31:31.746 "reset": true, 00:31:31.746 "nvme_admin": true, 00:31:31.746 "nvme_io": true, 00:31:31.746 "nvme_io_md": false, 00:31:31.746 "write_zeroes": true, 00:31:31.746 "zcopy": false, 00:31:31.746 "get_zone_info": false, 00:31:31.746 "zone_management": false, 00:31:31.746 "zone_append": false, 00:31:31.746 "compare": true, 00:31:31.746 "compare_and_write": true, 00:31:31.746 "abort": true, 00:31:31.746 "seek_hole": false, 00:31:31.746 "seek_data": false, 00:31:31.746 "copy": true, 00:31:31.746 "nvme_iov_md": false 00:31:31.746 }, 00:31:31.746 "memory_domains": [ 00:31:31.746 { 00:31:31.746 "dma_device_id": "system", 00:31:31.746 "dma_device_type": 1 00:31:31.746 } 00:31:31.746 ], 00:31:31.746 "driver_specific": { 00:31:31.746 "nvme": [ 00:31:31.746 { 00:31:31.746 "trid": { 00:31:31.746 "trtype": "TCP", 00:31:31.746 "adrfam": "IPv4", 00:31:31.746 "traddr": "10.0.0.2", 00:31:31.746 "trsvcid": "4420", 00:31:31.746 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:31.746 }, 00:31:31.746 "ctrlr_data": { 00:31:31.746 "cntlid": 1, 00:31:31.746 "vendor_id": "0x8086", 00:31:31.746 "model_number": "SPDK bdev Controller", 00:31:31.746 "serial_number": "SPDK0", 00:31:31.746 "firmware_revision": "25.01", 00:31:31.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:31.746 "oacs": { 00:31:31.746 "security": 0, 00:31:31.746 "format": 0, 00:31:31.746 "firmware": 0, 00:31:31.746 "ns_manage": 0 00:31:31.746 }, 00:31:31.746 "multi_ctrlr": true, 
00:31:31.746 "ana_reporting": false 00:31:31.746 }, 00:31:31.746 "vs": { 00:31:31.746 "nvme_version": "1.3" 00:31:31.746 }, 00:31:31.746 "ns_data": { 00:31:31.746 "id": 1, 00:31:31.746 "can_share": true 00:31:31.746 } 00:31:31.746 } 00:31:31.746 ], 00:31:31.746 "mp_policy": "active_passive" 00:31:31.746 } 00:31:31.746 } 00:31:31.746 ] 00:31:32.007 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2437358 00:31:32.007 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:32.007 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:32.007 Running I/O for 10 seconds... 00:31:32.949 Latency(us) 00:31:32.949 [2024-11-20T15:43:18.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.949 Nvme0n1 : 1.00 17720.00 69.22 0.00 0.00 0.00 0.00 0.00 00:31:32.949 [2024-11-20T15:43:18.908Z] =================================================================================================================== 00:31:32.949 [2024-11-20T15:43:18.908Z] Total : 17720.00 69.22 0.00 0.00 0.00 0.00 0.00 00:31:32.949 00:31:33.891 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:33.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:33.891 Nvme0n1 : 2.00 17813.50 69.58 0.00 0.00 0.00 0.00 0.00 00:31:33.891 [2024-11-20T15:43:19.850Z] 
=================================================================================================================== 00:31:33.891 [2024-11-20T15:43:19.850Z] Total : 17813.50 69.58 0.00 0.00 0.00 0.00 0.00 00:31:33.891 00:31:34.152 true 00:31:34.152 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:34.152 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:34.152 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:34.152 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:34.152 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2437358 00:31:35.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:35.093 Nvme0n1 : 3.00 17844.67 69.71 0.00 0.00 0.00 0.00 0.00 00:31:35.093 [2024-11-20T15:43:21.052Z] =================================================================================================================== 00:31:35.093 [2024-11-20T15:43:21.052Z] Total : 17844.67 69.71 0.00 0.00 0.00 0.00 0.00 00:31:35.093 00:31:36.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:36.037 Nvme0n1 : 4.00 17860.25 69.77 0.00 0.00 0.00 0.00 0.00 00:31:36.037 [2024-11-20T15:43:21.996Z] =================================================================================================================== 00:31:36.037 [2024-11-20T15:43:21.996Z] Total : 17860.25 69.77 0.00 0.00 0.00 0.00 0.00 00:31:36.037 00:31:36.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:31:36.978 Nvme0n1 : 5.00 17895.00 69.90 0.00 0.00 0.00 0.00 0.00 00:31:36.978 [2024-11-20T15:43:22.937Z] =================================================================================================================== 00:31:36.978 [2024-11-20T15:43:22.937Z] Total : 17895.00 69.90 0.00 0.00 0.00 0.00 0.00 00:31:36.978 00:31:37.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:37.920 Nvme0n1 : 6.00 17918.17 69.99 0.00 0.00 0.00 0.00 0.00 00:31:37.920 [2024-11-20T15:43:23.879Z] =================================================================================================================== 00:31:37.920 [2024-11-20T15:43:23.879Z] Total : 17918.17 69.99 0.00 0.00 0.00 0.00 0.00 00:31:37.920 00:31:38.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:38.860 Nvme0n1 : 7.00 17934.71 70.06 0.00 0.00 0.00 0.00 0.00 00:31:38.860 [2024-11-20T15:43:24.819Z] =================================================================================================================== 00:31:38.860 [2024-11-20T15:43:24.819Z] Total : 17934.71 70.06 0.00 0.00 0.00 0.00 0.00 00:31:38.860 00:31:40.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:40.243 Nvme0n1 : 8.00 17947.12 70.11 0.00 0.00 0.00 0.00 0.00 00:31:40.243 [2024-11-20T15:43:26.202Z] =================================================================================================================== 00:31:40.243 [2024-11-20T15:43:26.202Z] Total : 17947.12 70.11 0.00 0.00 0.00 0.00 0.00 00:31:40.243 00:31:41.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.183 Nvme0n1 : 9.00 17956.78 70.14 0.00 0.00 0.00 0.00 0.00 00:31:41.183 [2024-11-20T15:43:27.142Z] =================================================================================================================== 00:31:41.183 [2024-11-20T15:43:27.142Z] Total : 17956.78 70.14 0.00 0.00 0.00 0.00 0.00 00:31:41.183 
00:31:42.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.125 Nvme0n1 : 10.00 17964.50 70.17 0.00 0.00 0.00 0.00 0.00 00:31:42.125 [2024-11-20T15:43:28.084Z] =================================================================================================================== 00:31:42.125 [2024-11-20T15:43:28.084Z] Total : 17964.50 70.17 0.00 0.00 0.00 0.00 0.00 00:31:42.125 00:31:42.125 00:31:42.125 Latency(us) 00:31:42.125 [2024-11-20T15:43:28.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.125 Nvme0n1 : 10.00 17969.17 70.19 0.00 0.00 7120.05 2252.80 13380.27 00:31:42.125 [2024-11-20T15:43:28.084Z] =================================================================================================================== 00:31:42.125 [2024-11-20T15:43:28.084Z] Total : 17969.17 70.19 0.00 0.00 7120.05 2252.80 13380.27 00:31:42.125 { 00:31:42.125 "results": [ 00:31:42.125 { 00:31:42.125 "job": "Nvme0n1", 00:31:42.125 "core_mask": "0x2", 00:31:42.125 "workload": "randwrite", 00:31:42.125 "status": "finished", 00:31:42.125 "queue_depth": 128, 00:31:42.125 "io_size": 4096, 00:31:42.125 "runtime": 10.004524, 00:31:42.125 "iops": 17969.17074715399, 00:31:42.125 "mibps": 70.19207323107027, 00:31:42.125 "io_failed": 0, 00:31:42.125 "io_timeout": 0, 00:31:42.125 "avg_latency_us": 7120.046932075451, 00:31:42.125 "min_latency_us": 2252.8, 00:31:42.125 "max_latency_us": 13380.266666666666 00:31:42.125 } 00:31:42.125 ], 00:31:42.125 "core_count": 1 00:31:42.125 } 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2437194 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2437194 ']' 00:31:42.125 16:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2437194 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437194 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437194' 00:31:42.125 killing process with pid 2437194 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2437194 00:31:42.125 Received shutdown signal, test time was about 10.000000 seconds 00:31:42.125 00:31:42.125 Latency(us) 00:31:42.125 [2024-11-20T15:43:28.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.125 [2024-11-20T15:43:28.084Z] =================================================================================================================== 00:31:42.125 [2024-11-20T15:43:28.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:42.125 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2437194 00:31:42.125 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:42.385 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.647 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:42.647 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:42.647 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:42.647 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:42.647 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:42.908 [2024-11-20 16:43:28.690855] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:42.909 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:43.170 request: 00:31:43.170 { 00:31:43.170 "uuid": "d5e1fb13-ca7c-4f14-9536-36f9bc0099c6", 00:31:43.170 "method": 
"bdev_lvol_get_lvstores", 00:31:43.170 "req_id": 1 00:31:43.170 } 00:31:43.170 Got JSON-RPC error response 00:31:43.170 response: 00:31:43.170 { 00:31:43.170 "code": -19, 00:31:43.170 "message": "No such device" 00:31:43.170 } 00:31:43.170 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:43.170 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:43.170 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:43.170 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:43.170 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:43.170 aio_bdev 00:31:43.170 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2537ad97-d159-4679-8b28-b0e6c4f77616 00:31:43.170 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=2537ad97-d159-4679-8b28-b0e6c4f77616 00:31:43.170 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:43.170 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:43.170 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:43.170 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:43.170 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:43.446 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2537ad97-d159-4679-8b28-b0e6c4f77616 -t 2000 00:31:43.446 [ 00:31:43.446 { 00:31:43.446 "name": "2537ad97-d159-4679-8b28-b0e6c4f77616", 00:31:43.446 "aliases": [ 00:31:43.446 "lvs/lvol" 00:31:43.446 ], 00:31:43.446 "product_name": "Logical Volume", 00:31:43.446 "block_size": 4096, 00:31:43.446 "num_blocks": 38912, 00:31:43.446 "uuid": "2537ad97-d159-4679-8b28-b0e6c4f77616", 00:31:43.446 "assigned_rate_limits": { 00:31:43.446 "rw_ios_per_sec": 0, 00:31:43.446 "rw_mbytes_per_sec": 0, 00:31:43.446 "r_mbytes_per_sec": 0, 00:31:43.446 "w_mbytes_per_sec": 0 00:31:43.446 }, 00:31:43.447 "claimed": false, 00:31:43.447 "zoned": false, 00:31:43.447 "supported_io_types": { 00:31:43.447 "read": true, 00:31:43.447 "write": true, 00:31:43.447 "unmap": true, 00:31:43.447 "flush": false, 00:31:43.447 "reset": true, 00:31:43.447 "nvme_admin": false, 00:31:43.447 "nvme_io": false, 00:31:43.447 "nvme_io_md": false, 00:31:43.447 "write_zeroes": true, 00:31:43.447 "zcopy": false, 00:31:43.447 "get_zone_info": false, 00:31:43.447 "zone_management": false, 00:31:43.447 "zone_append": false, 00:31:43.447 "compare": false, 00:31:43.447 "compare_and_write": false, 00:31:43.447 "abort": false, 00:31:43.447 "seek_hole": true, 00:31:43.447 "seek_data": true, 00:31:43.447 "copy": false, 00:31:43.447 "nvme_iov_md": false 00:31:43.447 }, 00:31:43.447 "driver_specific": { 00:31:43.447 "lvol": { 00:31:43.447 "lvol_store_uuid": "d5e1fb13-ca7c-4f14-9536-36f9bc0099c6", 00:31:43.447 "base_bdev": "aio_bdev", 00:31:43.447 
"thin_provision": false, 00:31:43.448 "num_allocated_clusters": 38, 00:31:43.448 "snapshot": false, 00:31:43.448 "clone": false, 00:31:43.448 "esnap_clone": false 00:31:43.448 } 00:31:43.448 } 00:31:43.448 } 00:31:43.448 ] 00:31:43.713 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:43.713 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:43.713 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:43.713 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:43.713 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 00:31:43.713 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:43.973 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:43.973 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2537ad97-d159-4679-8b28-b0e6c4f77616 00:31:43.973 16:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5e1fb13-ca7c-4f14-9536-36f9bc0099c6 
00:31:44.234 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:44.493 00:31:44.493 real 0m15.758s 00:31:44.493 user 0m15.430s 00:31:44.493 sys 0m1.400s 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:44.493 ************************************ 00:31:44.493 END TEST lvs_grow_clean 00:31:44.493 ************************************ 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:44.493 ************************************ 00:31:44.493 START TEST lvs_grow_dirty 00:31:44.493 ************************************ 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:44.493 16:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:44.493 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:44.494 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:44.494 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:44.494 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:44.494 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:44.494 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:44.494 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:44.754 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:44.754 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:45.013 16:43:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:31:45.013 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:31:45.013 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:45.013 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:45.013 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:45.013 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 lvol 150 00:31:45.273 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=61074eca-2312-4734-96c7-edac928ba798 00:31:45.273 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:45.273 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:45.535 [2024-11-20 16:43:31.286791] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:45.535 [2024-11-20 
16:43:31.286932] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:45.535 true 00:31:45.535 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:31:45.535 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:45.535 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:45.535 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:45.796 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61074eca-2312-4734-96c7-edac928ba798 00:31:46.056 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:46.056 [2024-11-20 16:43:31.934944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.056 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2440143 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2440143 /var/tmp/bdevperf.sock 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2440143 ']' 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:46.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:46.317 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:46.317 [2024-11-20 16:43:32.169128] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:31:46.317 [2024-11-20 16:43:32.169194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440143 ] 00:31:46.317 [2024-11-20 16:43:32.258859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.578 [2024-11-20 16:43:32.288663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.149 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.149 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:47.149 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:47.409 Nvme0n1 00:31:47.409 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:47.671 [ 00:31:47.671 { 00:31:47.671 "name": "Nvme0n1", 00:31:47.671 "aliases": [ 00:31:47.671 "61074eca-2312-4734-96c7-edac928ba798" 00:31:47.671 ], 00:31:47.671 "product_name": "NVMe disk", 00:31:47.671 "block_size": 4096, 00:31:47.671 "num_blocks": 38912, 00:31:47.671 "uuid": "61074eca-2312-4734-96c7-edac928ba798", 00:31:47.671 "numa_id": 0, 00:31:47.671 "assigned_rate_limits": { 00:31:47.671 "rw_ios_per_sec": 0, 00:31:47.671 "rw_mbytes_per_sec": 0, 00:31:47.671 "r_mbytes_per_sec": 0, 00:31:47.671 "w_mbytes_per_sec": 0 00:31:47.671 }, 00:31:47.671 "claimed": false, 00:31:47.671 "zoned": false, 
00:31:47.671 "supported_io_types": { 00:31:47.671 "read": true, 00:31:47.671 "write": true, 00:31:47.671 "unmap": true, 00:31:47.671 "flush": true, 00:31:47.671 "reset": true, 00:31:47.671 "nvme_admin": true, 00:31:47.671 "nvme_io": true, 00:31:47.671 "nvme_io_md": false, 00:31:47.671 "write_zeroes": true, 00:31:47.671 "zcopy": false, 00:31:47.671 "get_zone_info": false, 00:31:47.671 "zone_management": false, 00:31:47.671 "zone_append": false, 00:31:47.671 "compare": true, 00:31:47.671 "compare_and_write": true, 00:31:47.671 "abort": true, 00:31:47.671 "seek_hole": false, 00:31:47.671 "seek_data": false, 00:31:47.671 "copy": true, 00:31:47.671 "nvme_iov_md": false 00:31:47.671 }, 00:31:47.671 "memory_domains": [ 00:31:47.671 { 00:31:47.671 "dma_device_id": "system", 00:31:47.671 "dma_device_type": 1 00:31:47.671 } 00:31:47.671 ], 00:31:47.671 "driver_specific": { 00:31:47.671 "nvme": [ 00:31:47.671 { 00:31:47.671 "trid": { 00:31:47.671 "trtype": "TCP", 00:31:47.671 "adrfam": "IPv4", 00:31:47.671 "traddr": "10.0.0.2", 00:31:47.671 "trsvcid": "4420", 00:31:47.671 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:47.671 }, 00:31:47.671 "ctrlr_data": { 00:31:47.671 "cntlid": 1, 00:31:47.671 "vendor_id": "0x8086", 00:31:47.671 "model_number": "SPDK bdev Controller", 00:31:47.671 "serial_number": "SPDK0", 00:31:47.671 "firmware_revision": "25.01", 00:31:47.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.671 "oacs": { 00:31:47.671 "security": 0, 00:31:47.671 "format": 0, 00:31:47.671 "firmware": 0, 00:31:47.671 "ns_manage": 0 00:31:47.671 }, 00:31:47.671 "multi_ctrlr": true, 00:31:47.671 "ana_reporting": false 00:31:47.671 }, 00:31:47.671 "vs": { 00:31:47.671 "nvme_version": "1.3" 00:31:47.671 }, 00:31:47.671 "ns_data": { 00:31:47.671 "id": 1, 00:31:47.671 "can_share": true 00:31:47.671 } 00:31:47.671 } 00:31:47.671 ], 00:31:47.671 "mp_policy": "active_passive" 00:31:47.671 } 00:31:47.671 } 00:31:47.671 ] 00:31:47.671 16:43:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2440365 00:31:47.671 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:47.671 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:47.671 Running I/O for 10 seconds... 00:31:48.614 Latency(us) 00:31:48.614 [2024-11-20T15:43:34.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.614 Nvme0n1 : 1.00 17656.00 68.97 0.00 0.00 0.00 0.00 0.00 00:31:48.614 [2024-11-20T15:43:34.573Z] =================================================================================================================== 00:31:48.614 [2024-11-20T15:43:34.573Z] Total : 17656.00 68.97 0.00 0.00 0.00 0.00 0.00 00:31:48.614 00:31:49.556 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:31:49.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.556 Nvme0n1 : 2.00 17781.50 69.46 0.00 0.00 0.00 0.00 0.00 00:31:49.556 [2024-11-20T15:43:35.515Z] =================================================================================================================== 00:31:49.556 [2024-11-20T15:43:35.515Z] Total : 17781.50 69.46 0.00 0.00 0.00 0.00 0.00 00:31:49.556 00:31:49.816 true 00:31:49.816 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:31:49.816 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:50.077 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:50.077 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:50.077 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2440365 00:31:50.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.646 Nvme0n1 : 3.00 17823.33 69.62 0.00 0.00 0.00 0.00 0.00 00:31:50.646 [2024-11-20T15:43:36.605Z] =================================================================================================================== 00:31:50.646 [2024-11-20T15:43:36.605Z] Total : 17823.33 69.62 0.00 0.00 0.00 0.00 0.00 00:31:50.646 00:31:51.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.587 Nvme0n1 : 4.00 17876.00 69.83 0.00 0.00 0.00 0.00 0.00 00:31:51.587 [2024-11-20T15:43:37.546Z] =================================================================================================================== 00:31:51.587 [2024-11-20T15:43:37.546Z] Total : 17876.00 69.83 0.00 0.00 0.00 0.00 0.00 00:31:51.587 00:31:52.969 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.969 Nvme0n1 : 5.00 17882.20 69.85 0.00 0.00 0.00 0.00 0.00 00:31:52.969 [2024-11-20T15:43:38.928Z] =================================================================================================================== 00:31:52.969 [2024-11-20T15:43:38.928Z] Total : 17882.20 69.85 0.00 0.00 0.00 0.00 0.00 00:31:52.969 00:31:53.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:53.910 Nvme0n1 : 6.00 17907.50 69.95 0.00 0.00 0.00 0.00 0.00 00:31:53.910 [2024-11-20T15:43:39.869Z] =================================================================================================================== 00:31:53.910 [2024-11-20T15:43:39.869Z] Total : 17907.50 69.95 0.00 0.00 0.00 0.00 0.00 00:31:53.910 00:31:54.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.850 Nvme0n1 : 7.00 17925.57 70.02 0.00 0.00 0.00 0.00 0.00 00:31:54.850 [2024-11-20T15:43:40.809Z] =================================================================================================================== 00:31:54.850 [2024-11-20T15:43:40.809Z] Total : 17925.57 70.02 0.00 0.00 0.00 0.00 0.00 00:31:54.850 00:31:55.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.792 Nvme0n1 : 8.00 17939.12 70.07 0.00 0.00 0.00 0.00 0.00 00:31:55.792 [2024-11-20T15:43:41.751Z] =================================================================================================================== 00:31:55.792 [2024-11-20T15:43:41.751Z] Total : 17939.12 70.07 0.00 0.00 0.00 0.00 0.00 00:31:55.792 00:31:56.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.731 Nvme0n1 : 9.00 17949.67 70.12 0.00 0.00 0.00 0.00 0.00 00:31:56.731 [2024-11-20T15:43:42.690Z] =================================================================================================================== 00:31:56.731 [2024-11-20T15:43:42.690Z] Total : 17949.67 70.12 0.00 0.00 0.00 0.00 0.00 00:31:56.731 00:31:57.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.672 Nvme0n1 : 10.00 17958.10 70.15 0.00 0.00 0.00 0.00 0.00 00:31:57.672 [2024-11-20T15:43:43.631Z] =================================================================================================================== 00:31:57.672 [2024-11-20T15:43:43.631Z] Total : 17958.10 70.15 0.00 0.00 0.00 0.00 0.00 00:31:57.672 00:31:57.672 
00:31:57.672 Latency(us) 00:31:57.672 [2024-11-20T15:43:43.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.672 Nvme0n1 : 10.01 17961.25 70.16 0.00 0.00 7123.92 1631.57 13544.11 00:31:57.672 [2024-11-20T15:43:43.631Z] =================================================================================================================== 00:31:57.672 [2024-11-20T15:43:43.631Z] Total : 17961.25 70.16 0.00 0.00 7123.92 1631.57 13544.11 00:31:57.672 { 00:31:57.672 "results": [ 00:31:57.672 { 00:31:57.672 "job": "Nvme0n1", 00:31:57.672 "core_mask": "0x2", 00:31:57.672 "workload": "randwrite", 00:31:57.672 "status": "finished", 00:31:57.672 "queue_depth": 128, 00:31:57.672 "io_size": 4096, 00:31:57.672 "runtime": 10.005372, 00:31:57.672 "iops": 17961.251215846845, 00:31:57.672 "mibps": 70.16113756190174, 00:31:57.672 "io_failed": 0, 00:31:57.672 "io_timeout": 0, 00:31:57.672 "avg_latency_us": 7123.921600365035, 00:31:57.672 "min_latency_us": 1631.5733333333333, 00:31:57.672 "max_latency_us": 13544.106666666667 00:31:57.672 } 00:31:57.672 ], 00:31:57.672 "core_count": 1 00:31:57.672 } 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2440143 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2440143 ']' 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2440143 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.672 16:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2440143 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2440143' 00:31:57.672 killing process with pid 2440143 00:31:57.672 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2440143 00:31:57.672 Received shutdown signal, test time was about 10.000000 seconds 00:31:57.672 00:31:57.672 Latency(us) 00:31:57.672 [2024-11-20T15:43:43.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.672 [2024-11-20T15:43:43.631Z] =================================================================================================================== 00:31:57.672 [2024-11-20T15:43:43.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:57.932 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2440143 00:31:57.932 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:58.192 16:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:58.192 16:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:31:58.192 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:58.452 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:58.452 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:58.452 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2436564 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2436564 00:31:58.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2436564 Killed "${NVMF_APP[@]}" "$@" 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2442402 00:31:58.453 16:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2442402 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2442402 ']' 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:58.453 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:58.453 [2024-11-20 16:43:44.328675] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:58.453 [2024-11-20 16:43:44.329711] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:31:58.453 [2024-11-20 16:43:44.329755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:58.713 [2024-11-20 16:43:44.410403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.713 [2024-11-20 16:43:44.447497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:58.713 [2024-11-20 16:43:44.447532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:58.713 [2024-11-20 16:43:44.447540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:58.713 [2024-11-20 16:43:44.447547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:58.713 [2024-11-20 16:43:44.447552] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:58.713 [2024-11-20 16:43:44.448132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.713 [2024-11-20 16:43:44.503853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:58.713 [2024-11-20 16:43:44.504111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:59.283 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.283 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:59.283 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.283 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.283 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:59.283 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.283 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:59.542 [2024-11-20 16:43:45.322941] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:59.542 [2024-11-20 16:43:45.323038] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:59.542 [2024-11-20 16:43:45.323070] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:59.542 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:59.542 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 61074eca-2312-4734-96c7-edac928ba798 00:31:59.542 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=61074eca-2312-4734-96c7-edac928ba798 00:31:59.542 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:59.542 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:59.542 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:59.542 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:59.542 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:59.802 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61074eca-2312-4734-96c7-edac928ba798 -t 2000 00:31:59.802 [ 00:31:59.802 { 00:31:59.802 "name": "61074eca-2312-4734-96c7-edac928ba798", 00:31:59.802 "aliases": [ 00:31:59.802 "lvs/lvol" 00:31:59.802 ], 00:31:59.802 "product_name": "Logical Volume", 00:31:59.802 "block_size": 4096, 00:31:59.802 "num_blocks": 38912, 00:31:59.802 "uuid": "61074eca-2312-4734-96c7-edac928ba798", 00:31:59.802 "assigned_rate_limits": { 00:31:59.802 "rw_ios_per_sec": 0, 00:31:59.802 "rw_mbytes_per_sec": 0, 00:31:59.802 "r_mbytes_per_sec": 0, 00:31:59.802 "w_mbytes_per_sec": 0 00:31:59.802 }, 00:31:59.802 "claimed": false, 00:31:59.802 "zoned": false, 00:31:59.802 "supported_io_types": { 00:31:59.802 "read": true, 00:31:59.802 "write": true, 00:31:59.802 "unmap": true, 00:31:59.802 "flush": false, 00:31:59.802 "reset": true, 00:31:59.802 "nvme_admin": false, 00:31:59.802 "nvme_io": false, 00:31:59.802 "nvme_io_md": false, 00:31:59.802 "write_zeroes": true, 
00:31:59.802 "zcopy": false, 00:31:59.802 "get_zone_info": false, 00:31:59.802 "zone_management": false, 00:31:59.802 "zone_append": false, 00:31:59.802 "compare": false, 00:31:59.802 "compare_and_write": false, 00:31:59.802 "abort": false, 00:31:59.802 "seek_hole": true, 00:31:59.802 "seek_data": true, 00:31:59.802 "copy": false, 00:31:59.802 "nvme_iov_md": false 00:31:59.802 }, 00:31:59.802 "driver_specific": { 00:31:59.802 "lvol": { 00:31:59.802 "lvol_store_uuid": "cae765c9-4af5-48b1-9c08-0b65ddbcf7b8", 00:31:59.802 "base_bdev": "aio_bdev", 00:31:59.802 "thin_provision": false, 00:31:59.802 "num_allocated_clusters": 38, 00:31:59.802 "snapshot": false, 00:31:59.802 "clone": false, 00:31:59.802 "esnap_clone": false 00:31:59.802 } 00:31:59.802 } 00:31:59.802 } 00:31:59.802 ] 00:31:59.802 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:59.802 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:31:59.802 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:00.062 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:00.062 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:32:00.062 16:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:00.323 [2024-11-20 16:43:46.196502] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:00.323 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:32:00.584 request: 00:32:00.584 { 00:32:00.584 "uuid": "cae765c9-4af5-48b1-9c08-0b65ddbcf7b8", 00:32:00.584 "method": "bdev_lvol_get_lvstores", 00:32:00.584 "req_id": 1 00:32:00.584 } 00:32:00.584 Got JSON-RPC error response 00:32:00.584 response: 00:32:00.584 { 00:32:00.584 "code": -19, 00:32:00.584 "message": "No such device" 00:32:00.584 } 00:32:00.584 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:32:00.584 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:00.584 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:00.584 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:00.584 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:00.844 aio_bdev 00:32:00.844 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 61074eca-2312-4734-96c7-edac928ba798 00:32:00.844 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=61074eca-2312-4734-96c7-edac928ba798 00:32:00.844 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:00.844 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:32:00.844 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:00.844 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:00.844 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:00.844 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61074eca-2312-4734-96c7-edac928ba798 -t 2000 00:32:01.105 [ 00:32:01.105 { 00:32:01.105 "name": "61074eca-2312-4734-96c7-edac928ba798", 00:32:01.105 "aliases": [ 00:32:01.105 "lvs/lvol" 00:32:01.105 ], 00:32:01.105 "product_name": "Logical Volume", 00:32:01.105 "block_size": 4096, 00:32:01.105 "num_blocks": 38912, 00:32:01.105 "uuid": "61074eca-2312-4734-96c7-edac928ba798", 00:32:01.105 "assigned_rate_limits": { 00:32:01.105 "rw_ios_per_sec": 0, 00:32:01.105 "rw_mbytes_per_sec": 0, 00:32:01.105 
"r_mbytes_per_sec": 0, 00:32:01.105 "w_mbytes_per_sec": 0 00:32:01.105 }, 00:32:01.105 "claimed": false, 00:32:01.105 "zoned": false, 00:32:01.105 "supported_io_types": { 00:32:01.105 "read": true, 00:32:01.105 "write": true, 00:32:01.105 "unmap": true, 00:32:01.105 "flush": false, 00:32:01.105 "reset": true, 00:32:01.105 "nvme_admin": false, 00:32:01.105 "nvme_io": false, 00:32:01.105 "nvme_io_md": false, 00:32:01.105 "write_zeroes": true, 00:32:01.105 "zcopy": false, 00:32:01.105 "get_zone_info": false, 00:32:01.105 "zone_management": false, 00:32:01.105 "zone_append": false, 00:32:01.105 "compare": false, 00:32:01.105 "compare_and_write": false, 00:32:01.105 "abort": false, 00:32:01.105 "seek_hole": true, 00:32:01.105 "seek_data": true, 00:32:01.105 "copy": false, 00:32:01.105 "nvme_iov_md": false 00:32:01.105 }, 00:32:01.105 "driver_specific": { 00:32:01.105 "lvol": { 00:32:01.105 "lvol_store_uuid": "cae765c9-4af5-48b1-9c08-0b65ddbcf7b8", 00:32:01.105 "base_bdev": "aio_bdev", 00:32:01.105 "thin_provision": false, 00:32:01.105 "num_allocated_clusters": 38, 00:32:01.105 "snapshot": false, 00:32:01.105 "clone": false, 00:32:01.105 "esnap_clone": false 00:32:01.105 } 00:32:01.105 } 00:32:01.105 } 00:32:01.105 ] 00:32:01.105 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:32:01.105 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:32:01.105 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:01.365 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:01.365 16:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:32:01.365 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:01.365 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:01.365 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61074eca-2312-4734-96c7-edac928ba798 00:32:01.636 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cae765c9-4af5-48b1-9c08-0b65ddbcf7b8 00:32:01.636 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:01.895 00:32:01.895 real 0m17.399s 00:32:01.895 user 0m35.362s 00:32:01.895 sys 0m2.915s 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:01.895 ************************************ 00:32:01.895 END TEST lvs_grow_dirty 00:32:01.895 ************************************ 
00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:32:01.895 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:01.895 nvmf_trace.0 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:02.155 16:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:02.155 rmmod nvme_tcp 00:32:02.155 rmmod nvme_fabrics 00:32:02.155 rmmod nvme_keyring 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2442402 ']' 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2442402 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2442402 ']' 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2442402 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:02.155 16:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2442402 00:32:02.155 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:02.155 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:02.155 
16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2442402' 00:32:02.155 killing process with pid 2442402 00:32:02.155 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2442402 00:32:02.155 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2442402 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:02.417 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.330 
16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:04.330 00:32:04.330 real 0m44.531s 00:32:04.330 user 0m53.730s 00:32:04.330 sys 0m10.452s 00:32:04.330 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.330 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:04.330 ************************************ 00:32:04.330 END TEST nvmf_lvs_grow 00:32:04.330 ************************************ 00:32:04.330 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:04.330 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:04.330 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.330 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:04.592 ************************************ 00:32:04.592 START TEST nvmf_bdev_io_wait 00:32:04.592 ************************************ 00:32:04.592 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:04.592 * Looking for test storage... 
00:32:04.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:04.592 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:04.592 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:32:04.592 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:04.592 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.593 --rc genhtml_branch_coverage=1 00:32:04.593 --rc genhtml_function_coverage=1 00:32:04.593 --rc genhtml_legend=1 00:32:04.593 --rc geninfo_all_blocks=1 00:32:04.593 --rc geninfo_unexecuted_blocks=1 00:32:04.593 00:32:04.593 ' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.593 --rc genhtml_branch_coverage=1 00:32:04.593 --rc genhtml_function_coverage=1 00:32:04.593 --rc genhtml_legend=1 00:32:04.593 --rc geninfo_all_blocks=1 00:32:04.593 --rc geninfo_unexecuted_blocks=1 00:32:04.593 00:32:04.593 ' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.593 --rc genhtml_branch_coverage=1 00:32:04.593 --rc genhtml_function_coverage=1 00:32:04.593 --rc genhtml_legend=1 00:32:04.593 --rc geninfo_all_blocks=1 00:32:04.593 --rc geninfo_unexecuted_blocks=1 00:32:04.593 00:32:04.593 ' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:04.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.593 --rc genhtml_branch_coverage=1 00:32:04.593 --rc genhtml_function_coverage=1 
00:32:04.593 --rc genhtml_legend=1 00:32:04.593 --rc geninfo_all_blocks=1 00:32:04.593 --rc geninfo_unexecuted_blocks=1 00:32:04.593 00:32:04.593 ' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:04.593 16:43:50 
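The trace above steps through `cmp_versions 1.15 '<' 2` from scripts/common.sh: each version string is split on `IFS=.-:` into an array, and components are compared numerically left to right, with missing components treated as 0. A minimal stand-alone sketch of that idea (the `ver_lt` helper name is hypothetical, not SPDK's actual function):

```shell
# Component-wise "less than" for version strings, mirroring the
# IFS=.-: splitting and per-component numeric compare in the trace.
# Returns 0 (true) when $1 sorts strictly before $2.
ver_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local len=${#v1[@]} i a b
    if (( ${#v2[@]} > len )); then len=${#v2[@]}; fi
    for ((i = 0; i < len; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # pad short versions with 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # all components equal: not strictly less-than
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
ver_lt 2.1 2.0 || echo "2.1 does not predate 2.0"
```

Note the numeric compare is what makes `1.2 < 1.15` false here, unlike a plain string sort.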
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.593 16:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:04.593 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:04.594 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:04.594 16:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:04.594 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:04.854 16:43:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:04.854 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:12.997 16:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:12.997 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:12.997 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:12.997 Found net devices under 0000:31:00.0: cvl_0_0 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.997 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:12.998 Found net devices under 0000:31:00.1: cvl_0_1 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:12.998 16:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:12.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:12.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:32:12.998 00:32:12.998 --- 10.0.0.2 ping statistics --- 00:32:12.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.998 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:32:12.998 00:32:12.998 --- 10.0.0.1 ping statistics --- 00:32:12.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.998 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:12.998 16:43:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2447483 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2447483 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2447483 ']' 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.998 16:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:12.998 [2024-11-20 16:43:57.922204] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:12.998 [2024-11-20 16:43:57.923379] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:32:12.998 [2024-11-20 16:43:57.923430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.998 [2024-11-20 16:43:58.007196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:12.998 [2024-11-20 16:43:58.049785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.998 [2024-11-20 16:43:58.049821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.998 [2024-11-20 16:43:58.049829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:12.998 [2024-11-20 16:43:58.049836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:12.998 [2024-11-20 16:43:58.049842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:12.998 [2024-11-20 16:43:58.051667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:12.998 [2024-11-20 16:43:58.051781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:12.998 [2024-11-20 16:43:58.051936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:12.998 [2024-11-20 16:43:58.051937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:12.998 [2024-11-20 16:43:58.052213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.998 16:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.998 [2024-11-20 16:43:58.797990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:12.998 [2024-11-20 16:43:58.798350] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:12.998 [2024-11-20 16:43:58.799116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:12.998 [2024-11-20 16:43:58.799201] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.998 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.999 [2024-11-20 16:43:58.804389] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.999 Malloc0 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.999 16:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:12.999 [2024-11-20 16:43:58.856560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2447518 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2447521 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:12.999 16:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:12.999 { 00:32:12.999 "params": { 00:32:12.999 "name": "Nvme$subsystem", 00:32:12.999 "trtype": "$TEST_TRANSPORT", 00:32:12.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:12.999 "adrfam": "ipv4", 00:32:12.999 "trsvcid": "$NVMF_PORT", 00:32:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:12.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:12.999 "hdgst": ${hdgst:-false}, 00:32:12.999 "ddgst": ${ddgst:-false} 00:32:12.999 }, 00:32:12.999 "method": "bdev_nvme_attach_controller" 00:32:12.999 } 00:32:12.999 EOF 00:32:12.999 )") 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2447523 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:12.999 16:43:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2447526 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:12.999 { 00:32:12.999 "params": { 00:32:12.999 "name": "Nvme$subsystem", 00:32:12.999 "trtype": "$TEST_TRANSPORT", 00:32:12.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:12.999 "adrfam": "ipv4", 00:32:12.999 "trsvcid": "$NVMF_PORT", 00:32:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:12.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:12.999 "hdgst": ${hdgst:-false}, 00:32:12.999 "ddgst": ${ddgst:-false} 00:32:12.999 }, 00:32:12.999 "method": "bdev_nvme_attach_controller" 00:32:12.999 } 00:32:12.999 EOF 00:32:12.999 )") 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:12.999 { 00:32:12.999 "params": { 00:32:12.999 "name": 
"Nvme$subsystem", 00:32:12.999 "trtype": "$TEST_TRANSPORT", 00:32:12.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:12.999 "adrfam": "ipv4", 00:32:12.999 "trsvcid": "$NVMF_PORT", 00:32:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:12.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:12.999 "hdgst": ${hdgst:-false}, 00:32:12.999 "ddgst": ${ddgst:-false} 00:32:12.999 }, 00:32:12.999 "method": "bdev_nvme_attach_controller" 00:32:12.999 } 00:32:12.999 EOF 00:32:12.999 )") 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:12.999 { 00:32:12.999 "params": { 00:32:12.999 "name": "Nvme$subsystem", 00:32:12.999 "trtype": "$TEST_TRANSPORT", 00:32:12.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:12.999 "adrfam": "ipv4", 00:32:12.999 "trsvcid": "$NVMF_PORT", 00:32:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:12.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:12.999 "hdgst": ${hdgst:-false}, 00:32:12.999 "ddgst": ${ddgst:-false} 00:32:12.999 }, 00:32:12.999 "method": 
"bdev_nvme_attach_controller" 00:32:12.999 } 00:32:12.999 EOF 00:32:12.999 )") 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2447518 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:12.999 "params": { 00:32:12.999 "name": "Nvme1", 00:32:12.999 "trtype": "tcp", 00:32:12.999 "traddr": "10.0.0.2", 00:32:12.999 "adrfam": "ipv4", 00:32:12.999 "trsvcid": "4420", 00:32:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:12.999 "hdgst": false, 00:32:12.999 "ddgst": false 00:32:12.999 }, 00:32:12.999 "method": "bdev_nvme_attach_controller" 00:32:12.999 }' 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:12.999 "params": { 00:32:12.999 "name": "Nvme1", 00:32:12.999 "trtype": "tcp", 00:32:12.999 "traddr": "10.0.0.2", 00:32:12.999 "adrfam": "ipv4", 00:32:12.999 "trsvcid": "4420", 00:32:12.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:12.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:12.999 "hdgst": false, 00:32:12.999 "ddgst": false 00:32:12.999 }, 00:32:12.999 "method": "bdev_nvme_attach_controller" 00:32:12.999 }' 00:32:12.999 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:13.000 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.000 "params": { 00:32:13.000 "name": "Nvme1", 00:32:13.000 "trtype": "tcp", 00:32:13.000 "traddr": "10.0.0.2", 00:32:13.000 "adrfam": "ipv4", 00:32:13.000 "trsvcid": "4420", 00:32:13.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.000 "hdgst": false, 00:32:13.000 "ddgst": false 00:32:13.000 }, 00:32:13.000 "method": "bdev_nvme_attach_controller" 00:32:13.000 }' 00:32:13.000 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:13.000 16:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:13.000 "params": { 00:32:13.000 "name": "Nvme1", 00:32:13.000 "trtype": "tcp", 00:32:13.000 "traddr": "10.0.0.2", 00:32:13.000 "adrfam": "ipv4", 00:32:13.000 "trsvcid": "4420", 00:32:13.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:13.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:13.000 "hdgst": false, 00:32:13.000 "ddgst": false 00:32:13.000 }, 00:32:13.000 "method": "bdev_nvme_attach_controller" 
00:32:13.000 }' 00:32:13.000 [2024-11-20 16:43:58.911891] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:32:13.000 [2024-11-20 16:43:58.911944] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:13.000 [2024-11-20 16:43:58.913033] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:32:13.000 [2024-11-20 16:43:58.913081] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:13.000 [2024-11-20 16:43:58.915454] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:32:13.000 [2024-11-20 16:43:58.915501] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:13.000 [2024-11-20 16:43:58.927996] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:32:13.000 [2024-11-20 16:43:58.928042] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:13.260 [2024-11-20 16:43:59.069584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.260 [2024-11-20 16:43:59.098668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:13.260 [2024-11-20 16:43:59.128896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.260 [2024-11-20 16:43:59.158524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:13.260 [2024-11-20 16:43:59.173916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.260 [2024-11-20 16:43:59.202733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:13.520 [2024-11-20 16:43:59.223072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.520 [2024-11-20 16:43:59.250850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:13.520 Running I/O for 1 seconds... 00:32:13.520 Running I/O for 1 seconds... 00:32:13.520 Running I/O for 1 seconds... 00:32:13.779 Running I/O for 1 seconds... 
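The repeated `config+=("$(cat <<-EOF ...)")` / `printf '%s\n'` fragments in the trace above come from the `gen_nvmf_target_json` helper in nvmf/common.sh, which builds the `--json /dev/fd/63` config consumed by each bdevperf instance. The following is a minimal stand-alone reconstruction of that pattern, not the actual helper; the default values for `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` are assumptions read off the expanded output in the log (the real script also pipes the result through `jq .` for validation, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the trace: one
# bdev_nvme_attach_controller params object per subsystem argument,
# joined with commas via IFS. Defaults below mirror the log's output.
gen_nvmf_target_json() {
  local subsystem
  local config=()
  for subsystem in "${@:-1}"; do
    # Each heredoc expands to one attach-controller RPC description.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  # Comma-join the objects, as the "IFS=," / printf lines in the log show.
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

In the test itself this output is handed to bdevperf as `--json /dev/fd/63` through process substitution, which is why the same JSON body appears four times in the trace, once per workload (write, read, flush, unmap).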
00:32:14.350 177632.00 IOPS, 693.88 MiB/s 00:32:14.350 Latency(us) 00:32:14.350 [2024-11-20T15:44:00.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.350 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:14.350 Nvme1n1 : 1.00 177275.17 692.48 0.00 0.00 717.88 307.20 1993.39 00:32:14.350 [2024-11-20T15:44:00.309Z] =================================================================================================================== 00:32:14.350 [2024-11-20T15:44:00.309Z] Total : 177275.17 692.48 0.00 0.00 717.88 307.20 1993.39 00:32:14.610 9231.00 IOPS, 36.06 MiB/s 00:32:14.610 Latency(us) 00:32:14.610 [2024-11-20T15:44:00.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.610 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:14.610 Nvme1n1 : 1.02 9213.86 35.99 0.00 0.00 13773.72 4860.59 24794.45 00:32:14.610 [2024-11-20T15:44:00.569Z] =================================================================================================================== 00:32:14.610 [2024-11-20T15:44:00.569Z] Total : 9213.86 35.99 0.00 0.00 13773.72 4860.59 24794.45 00:32:14.610 19245.00 IOPS, 75.18 MiB/s 00:32:14.610 Latency(us) 00:32:14.610 [2024-11-20T15:44:00.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.610 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:14.610 Nvme1n1 : 1.01 19306.60 75.42 0.00 0.00 6611.36 2157.23 11031.89 00:32:14.610 [2024-11-20T15:44:00.569Z] =================================================================================================================== 00:32:14.610 [2024-11-20T15:44:00.569Z] Total : 19306.60 75.42 0.00 0.00 6611.36 2157.23 11031.89 00:32:14.610 9648.00 IOPS, 37.69 MiB/s [2024-11-20T15:44:00.569Z] 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2447521 00:32:14.610 16:44:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2447523 00:32:14.610 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2447526 00:32:14.610 00:32:14.610 Latency(us) 00:32:14.610 [2024-11-20T15:44:00.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.610 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:14.610 Nvme1n1 : 1.01 9769.02 38.16 0.00 0.00 13069.63 3290.45 29928.11 00:32:14.610 [2024-11-20T15:44:00.569Z] =================================================================================================================== 00:32:14.610 [2024-11-20T15:44:00.569Z] Total : 9769.02 38.16 0.00 0.00 13069.63 3290.45 29928.11 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:14.871 rmmod nvme_tcp 00:32:14.871 rmmod nvme_fabrics 00:32:14.871 rmmod nvme_keyring 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:14.871 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2447483 ']' 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2447483 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2447483 ']' 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2447483 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2447483 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:14.872 16:44:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2447483' 00:32:14.872 killing process with pid 2447483 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2447483 00:32:14.872 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2447483 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:15.133 16:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.042 16:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:17.042 00:32:17.042 real 0m12.661s 00:32:17.042 user 0m15.241s 00:32:17.042 sys 0m7.080s 00:32:17.042 16:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.042 16:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:17.042 ************************************ 00:32:17.042 END TEST nvmf_bdev_io_wait 00:32:17.042 ************************************ 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:17.304 ************************************ 00:32:17.304 START TEST nvmf_queue_depth 00:32:17.304 ************************************ 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:17.304 * Looking for test storage... 
00:32:17.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:17.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.304 --rc genhtml_branch_coverage=1 00:32:17.304 --rc genhtml_function_coverage=1 00:32:17.304 --rc genhtml_legend=1 00:32:17.304 --rc geninfo_all_blocks=1 00:32:17.304 --rc geninfo_unexecuted_blocks=1 00:32:17.304 00:32:17.304 ' 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:17.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.304 --rc genhtml_branch_coverage=1 00:32:17.304 --rc genhtml_function_coverage=1 00:32:17.304 --rc genhtml_legend=1 00:32:17.304 --rc geninfo_all_blocks=1 00:32:17.304 --rc geninfo_unexecuted_blocks=1 00:32:17.304 00:32:17.304 ' 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:17.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.304 --rc genhtml_branch_coverage=1 00:32:17.304 --rc genhtml_function_coverage=1 00:32:17.304 --rc genhtml_legend=1 00:32:17.304 --rc geninfo_all_blocks=1 00:32:17.304 --rc geninfo_unexecuted_blocks=1 00:32:17.304 00:32:17.304 ' 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:17.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.304 --rc genhtml_branch_coverage=1 00:32:17.304 --rc genhtml_function_coverage=1 00:32:17.304 --rc genhtml_legend=1 00:32:17.304 --rc 
geninfo_all_blocks=1 00:32:17.304 --rc geninfo_unexecuted_blocks=1 00:32:17.304 00:32:17.304 ' 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.304 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.565 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.566 16:44:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.566 16:44:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.566 16:44:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.566 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.375 
16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:24.375 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.375 16:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:24.375 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:24.375 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:24.376 Found net devices under 0000:31:00.0: cvl_0_0 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:24.376 Found net devices under 0000:31:00.1: cvl_0_1 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:24.376 16:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.376 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:24.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:24.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.721 ms 00:32:24.638 00:32:24.638 --- 10.0.0.2 ping statistics --- 00:32:24.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.638 rtt min/avg/max/mdev = 0.721/0.721/0.721/0.000 ms 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:24.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:24.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:32:24.638 00:32:24.638 --- 10.0.0.1 ping statistics --- 00:32:24.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:24.638 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:24.638 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:24.639 16:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.639 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2452233 00:32:24.900 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2452233 00:32:24.900 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:24.900 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2452233 ']' 00:32:24.900 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.900 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:24.900 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:24.900 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:24.900 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:24.900 [2024-11-20 16:44:10.654265] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:24.900 [2024-11-20 16:44:10.655405] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:32:24.900 [2024-11-20 16:44:10.655457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.900 [2024-11-20 16:44:10.760660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.900 [2024-11-20 16:44:10.811341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.900 [2024-11-20 16:44:10.811391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.900 [2024-11-20 16:44:10.811399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.900 [2024-11-20 16:44:10.811407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.900 [2024-11-20 16:44:10.811413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.900 [2024-11-20 16:44:10.812100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.161 [2024-11-20 16:44:10.889843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:25.161 [2024-11-20 16:44:10.890148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.733 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.733 [2024-11-20 16:44:11.516924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.734 Malloc0 00:32:25.734 16:44:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.734 [2024-11-20 16:44:11.593039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.734 
16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2452266 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2452266 /var/tmp/bdevperf.sock 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2452266 ']' 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:25.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.734 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:25.734 [2024-11-20 16:44:11.659886] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:32:25.734 [2024-11-20 16:44:11.659956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2452266 ] 00:32:25.995 [2024-11-20 16:44:11.738092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.995 [2024-11-20 16:44:11.780707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.568 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.568 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:26.568 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:26.568 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.568 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:26.828 NVMe0n1 00:32:26.828 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.828 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:26.829 Running I/O for 10 seconds... 
00:32:29.154 8407.00 IOPS, 32.84 MiB/s [2024-11-20T15:44:16.056Z] 8724.00 IOPS, 34.08 MiB/s [2024-11-20T15:44:16.998Z] 8910.00 IOPS, 34.80 MiB/s [2024-11-20T15:44:17.940Z] 9611.50 IOPS, 37.54 MiB/s [2024-11-20T15:44:18.882Z] 10051.20 IOPS, 39.26 MiB/s [2024-11-20T15:44:19.826Z] 10406.67 IOPS, 40.65 MiB/s [2024-11-20T15:44:21.209Z] 10561.00 IOPS, 41.25 MiB/s [2024-11-20T15:44:22.150Z] 10758.50 IOPS, 42.03 MiB/s [2024-11-20T15:44:23.094Z] 10922.89 IOPS, 42.67 MiB/s [2024-11-20T15:44:23.094Z] 11052.70 IOPS, 43.17 MiB/s 00:32:37.135 Latency(us) 00:32:37.135 [2024-11-20T15:44:23.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.135 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:37.135 Verification LBA range: start 0x0 length 0x4000 00:32:37.135 NVMe0n1 : 10.06 11078.06 43.27 0.00 0.00 92102.11 24466.77 74711.04 00:32:37.135 [2024-11-20T15:44:23.094Z] =================================================================================================================== 00:32:37.135 [2024-11-20T15:44:23.094Z] Total : 11078.06 43.27 0.00 0.00 92102.11 24466.77 74711.04 00:32:37.135 { 00:32:37.135 "results": [ 00:32:37.135 { 00:32:37.135 "job": "NVMe0n1", 00:32:37.135 "core_mask": "0x1", 00:32:37.135 "workload": "verify", 00:32:37.135 "status": "finished", 00:32:37.135 "verify_range": { 00:32:37.135 "start": 0, 00:32:37.135 "length": 16384 00:32:37.135 }, 00:32:37.135 "queue_depth": 1024, 00:32:37.135 "io_size": 4096, 00:32:37.135 "runtime": 10.060152, 00:32:37.135 "iops": 11078.063234034635, 00:32:37.135 "mibps": 43.27368450794779, 00:32:37.135 "io_failed": 0, 00:32:37.135 "io_timeout": 0, 00:32:37.135 "avg_latency_us": 92102.10953344041, 00:32:37.135 "min_latency_us": 24466.773333333334, 00:32:37.135 "max_latency_us": 74711.04 00:32:37.135 } 00:32:37.135 ], 00:32:37.135 "core_count": 1 00:32:37.135 } 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 2452266 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2452266 ']' 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2452266 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2452266 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2452266' 00:32:37.135 killing process with pid 2452266 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2452266 00:32:37.135 Received shutdown signal, test time was about 10.000000 seconds 00:32:37.135 00:32:37.135 Latency(us) 00:32:37.135 [2024-11-20T15:44:23.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.135 [2024-11-20T15:44:23.094Z] =================================================================================================================== 00:32:37.135 [2024-11-20T15:44:23.094Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.135 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2452266 00:32:37.135 16:44:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:37.135 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:37.135 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:37.135 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:37.135 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:37.135 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:37.135 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:37.135 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:37.135 rmmod nvme_tcp 00:32:37.396 rmmod nvme_fabrics 00:32:37.396 rmmod nvme_keyring 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2452233 ']' 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2452233 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2452233 ']' 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2452233 00:32:37.396 16:44:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2452233 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2452233' 00:32:37.396 killing process with pid 2452233 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2452233 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2452233 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:37.396 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.940 00:32:39.940 real 0m22.354s 00:32:39.940 user 0m24.826s 00:32:39.940 sys 0m7.234s 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:39.940 ************************************ 00:32:39.940 END TEST nvmf_queue_depth 00:32:39.940 ************************************ 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.940 ************************************ 00:32:39.940 START 
TEST nvmf_target_multipath 00:32:39.940 ************************************ 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:39.940 * Looking for test storage... 00:32:39.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.940 16:44:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:39.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.940 --rc genhtml_branch_coverage=1 00:32:39.940 --rc genhtml_function_coverage=1 00:32:39.940 --rc genhtml_legend=1 00:32:39.940 --rc geninfo_all_blocks=1 00:32:39.940 --rc geninfo_unexecuted_blocks=1 00:32:39.940 00:32:39.940 ' 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:39.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.940 --rc genhtml_branch_coverage=1 00:32:39.940 --rc genhtml_function_coverage=1 00:32:39.940 --rc genhtml_legend=1 00:32:39.940 --rc geninfo_all_blocks=1 00:32:39.940 --rc geninfo_unexecuted_blocks=1 00:32:39.940 00:32:39.940 ' 00:32:39.940 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:39.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.940 --rc genhtml_branch_coverage=1 00:32:39.940 --rc genhtml_function_coverage=1 00:32:39.940 --rc genhtml_legend=1 00:32:39.941 --rc geninfo_all_blocks=1 00:32:39.941 --rc geninfo_unexecuted_blocks=1 00:32:39.941 00:32:39.941 ' 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:39.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.941 --rc genhtml_branch_coverage=1 00:32:39.941 --rc genhtml_function_coverage=1 00:32:39.941 --rc genhtml_legend=1 00:32:39.941 --rc geninfo_all_blocks=1 00:32:39.941 --rc geninfo_unexecuted_blocks=1 00:32:39.941 00:32:39.941 ' 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.941 16:44:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.941 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.942 16:44:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.942 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.942 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.942 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:48.082 16:44:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:48.082 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:48.083 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:48.083 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:48.083 Found net devices under 0000:31:00.0: cvl_0_0 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:48.083 16:44:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:48.083 Found net devices under 0000:31:00.1: cvl_0_1 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:48.083 16:44:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:48.083 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:48.083 16:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:48.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:48.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:32:48.083 00:32:48.083 --- 10.0.0.2 ping statistics --- 00:32:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.083 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:48.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:48.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:32:48.083 00:32:48.083 --- 10.0.0.1 ping statistics --- 00:32:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:48.083 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:48.083 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:48.084 only one NIC for nvmf test 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:48.084 16:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.084 rmmod nvme_tcp 00:32:48.084 rmmod nvme_fabrics 00:32:48.084 rmmod nvme_keyring 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:48.084 16:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.084 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.467 
16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.467 00:32:49.467 real 0m9.842s 00:32:49.467 user 0m2.030s 00:32:49.467 sys 0m5.747s 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:49.467 ************************************ 00:32:49.467 END TEST nvmf_target_multipath 00:32:49.467 ************************************ 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:49.467 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:49.468 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.468 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:49.468 ************************************ 00:32:49.468 START TEST nvmf_zcopy 00:32:49.468 ************************************ 00:32:49.468 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:49.731 * Looking for test storage... 
00:32:49.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:49.731 16:44:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:49.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.731 --rc genhtml_branch_coverage=1 00:32:49.731 --rc genhtml_function_coverage=1 00:32:49.731 --rc genhtml_legend=1 00:32:49.731 --rc geninfo_all_blocks=1 00:32:49.731 --rc geninfo_unexecuted_blocks=1 00:32:49.731 00:32:49.731 ' 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:49.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.731 --rc genhtml_branch_coverage=1 00:32:49.731 --rc genhtml_function_coverage=1 00:32:49.731 --rc genhtml_legend=1 00:32:49.731 --rc geninfo_all_blocks=1 00:32:49.731 --rc geninfo_unexecuted_blocks=1 00:32:49.731 00:32:49.731 ' 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:49.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.731 --rc genhtml_branch_coverage=1 00:32:49.731 --rc genhtml_function_coverage=1 00:32:49.731 --rc genhtml_legend=1 00:32:49.731 --rc geninfo_all_blocks=1 00:32:49.731 --rc geninfo_unexecuted_blocks=1 00:32:49.731 00:32:49.731 ' 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:49.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.731 --rc genhtml_branch_coverage=1 00:32:49.731 --rc genhtml_function_coverage=1 00:32:49.731 --rc genhtml_legend=1 00:32:49.731 --rc geninfo_all_blocks=1 00:32:49.731 --rc geninfo_unexecuted_blocks=1 00:32:49.731 00:32:49.731 ' 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.731 16:44:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.731 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.732 16:44:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.732 16:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:57.875 
16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.875 16:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:57.875 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:57.875 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:57.875 Found net devices under 0000:31:00.0: cvl_0_0 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:57.875 Found net devices under 0000:31:00.1: cvl_0_1 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.875 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.876 16:44:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:57.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:32:57.876 00:32:57.876 --- 10.0.0.2 ping statistics --- 00:32:57.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.876 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:57.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:32:57.876 00:32:57.876 --- 10.0.0.1 ping statistics --- 00:32:57.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.876 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2462886 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2462886 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2462886 ']' 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:57.876 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.876 [2024-11-20 16:44:42.842225] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:57.876 [2024-11-20 16:44:42.843260] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:32:57.876 [2024-11-20 16:44:42.843300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.876 [2024-11-20 16:44:42.939696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.876 [2024-11-20 16:44:42.989710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.876 [2024-11-20 16:44:42.989761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.876 [2024-11-20 16:44:42.989770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.876 [2024-11-20 16:44:42.989781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.876 [2024-11-20 16:44:42.989787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.876 [2024-11-20 16:44:42.990603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.876 [2024-11-20 16:44:43.068273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:57.876 [2024-11-20 16:44:43.068559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.876 [2024-11-20 16:44:43.687463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.876 
16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.876 [2024-11-20 16:44:43.715736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.876 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.877 malloc0 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:57.877 { 00:32:57.877 "params": { 00:32:57.877 "name": "Nvme$subsystem", 00:32:57.877 "trtype": "$TEST_TRANSPORT", 00:32:57.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.877 "adrfam": "ipv4", 00:32:57.877 "trsvcid": "$NVMF_PORT", 00:32:57.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.877 "hdgst": ${hdgst:-false}, 00:32:57.877 "ddgst": ${ddgst:-false} 00:32:57.877 }, 00:32:57.877 "method": "bdev_nvme_attach_controller" 00:32:57.877 } 00:32:57.877 EOF 00:32:57.877 )") 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:57.877 16:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:57.877 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:57.877 "params": { 00:32:57.877 "name": "Nvme1", 00:32:57.877 "trtype": "tcp", 00:32:57.877 "traddr": "10.0.0.2", 00:32:57.877 "adrfam": "ipv4", 00:32:57.877 "trsvcid": "4420", 00:32:57.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:57.877 "hdgst": false, 00:32:57.877 "ddgst": false 00:32:57.877 }, 00:32:57.877 "method": "bdev_nvme_attach_controller" 00:32:57.877 }' 00:32:57.877 [2024-11-20 16:44:43.826296] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:32:57.877 [2024-11-20 16:44:43.826368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463010 ] 00:32:58.138 [2024-11-20 16:44:43.904532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.138 [2024-11-20 16:44:43.946380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.138 Running I/O for 10 seconds... 
00:33:00.465 6616.00 IOPS, 51.69 MiB/s
[2024-11-20T15:44:47.365Z] 6663.50 IOPS, 52.06 MiB/s
[2024-11-20T15:44:48.305Z] 6677.67 IOPS, 52.17 MiB/s
[2024-11-20T15:44:49.246Z] 6684.00 IOPS, 52.22 MiB/s
[2024-11-20T15:44:50.187Z] 6744.40 IOPS, 52.69 MiB/s
[2024-11-20T15:44:51.128Z] 7232.50 IOPS, 56.50 MiB/s
[2024-11-20T15:44:52.513Z] 7583.00 IOPS, 59.24 MiB/s
[2024-11-20T15:44:53.453Z] 7847.12 IOPS, 61.31 MiB/s
[2024-11-20T15:44:54.392Z] 8052.00 IOPS, 62.91 MiB/s
[2024-11-20T15:44:54.392Z] 8216.20 IOPS, 64.19 MiB/s
00:33:08.433 Latency(us)
00:33:08.433 [2024-11-20T15:44:54.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:08.433 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:08.433 Verification LBA range: start 0x0 length 0x1000
00:33:08.433 Nvme1n1 : 10.05 8187.55 63.97 0.00 0.00 15527.55 2894.51 43690.67
00:33:08.433 [2024-11-20T15:44:54.392Z] ===================================================================================================================
00:33:08.433 [2024-11-20T15:44:54.392Z] Total : 8187.55 63.97 0.00 0.00 15527.55 2894.51 43690.67
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2465013
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:08.433 {
00:33:08.433 "params": {
00:33:08.433 "name": "Nvme$subsystem",
00:33:08.433 "trtype": "$TEST_TRANSPORT",
00:33:08.433 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:08.433 "adrfam": "ipv4",
00:33:08.433 "trsvcid": "$NVMF_PORT",
00:33:08.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:08.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:08.433 "hdgst": ${hdgst:-false},
00:33:08.433 "ddgst": ${ddgst:-false}
00:33:08.433 },
00:33:08.433 "method": "bdev_nvme_attach_controller"
00:33:08.433 }
00:33:08.433 EOF
00:33:08.433 )")
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:33:08.433 [2024-11-20 16:44:54.291008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:08.433 [2024-11-20 16:44:54.291035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:33:08.433 16:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:08.433 "params": {
00:33:08.433 "name": "Nvme1",
00:33:08.433 "trtype": "tcp",
00:33:08.433 "traddr": "10.0.0.2",
00:33:08.433 "adrfam": "ipv4",
00:33:08.433 "trsvcid": "4420",
00:33:08.433 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:08.433 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:08.433 "hdgst": false,
00:33:08.433 "ddgst": false
00:33:08.433 },
00:33:08.433 "method": "bdev_nvme_attach_controller"
00:33:08.433 }'
00:33:08.433 [2024-11-20 16:44:54.302970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:08.433 [2024-11-20 16:44:54.302979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:08.433 [2024-11-20 16:44:54.314968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:08.433 [2024-11-20 16:44:54.314975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:08.433 [2024-11-20 16:44:54.326968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:08.433 [2024-11-20 16:44:54.326975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:08.433 [2024-11-20 16:44:54.338968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:08.433 [2024-11-20 16:44:54.338976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:08.433 [2024-11-20 16:44:54.343538] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization...
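[editor's note: the xtrace above shows the harness's `gen_nvmf_target_json` (from `nvmf/common.sh`) building the bdevperf `--json` config: per-subsystem attach-controller params are rendered from a heredoc into a bash array, joined with `IFS=,`, and pretty-printed. A minimal standalone sketch of that pattern follows; the exported variable values (`TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT`) are assumptions filled in so it runs outside the harness, and `jq` is replaced by a plain `printf` so the sketch has no external dependencies.]

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the trace.
# Assumed values standing in for what the autotest harness exports:
set -eu
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
# One JSON fragment per subsystem (the trace runs with the default "1").
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the harness does before piping to jq;
# bdevperf then reads this config via process substitution (/dev/fd/63).
IFS=,
printf '%s\n' "${config[*]}"
```

[editor's note: the heredoc-into-array approach lets the same template emit one attach-controller entry per subsystem, which is why the final printed JSON above contains the expanded `Nvme1`/`cnode1` values.]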
00:33:08.433 [2024-11-20 16:44:54.343584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465013 ] 00:33:08.433 [2024-11-20 16:44:54.350968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.433 [2024-11-20 16:44:54.350975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.433 [2024-11-20 16:44:54.362967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.433 [2024-11-20 16:44:54.362974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.433 [2024-11-20 16:44:54.374967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.433 [2024-11-20 16:44:54.374974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.433 [2024-11-20 16:44:54.386967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.433 [2024-11-20 16:44:54.386975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.398967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.398975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.410968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.410975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.413187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.693 [2024-11-20 16:44:54.422968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:08.693 [2024-11-20 16:44:54.422977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.434969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.434978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.446968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.446978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.448162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:08.693 [2024-11-20 16:44:54.458970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.458977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.470974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.470988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.482969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.482983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.494968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.494977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.506968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.506976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.518976] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.518992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.530974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.530988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.542971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.542979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.554971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.554980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.566969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.566976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.578969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.578976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.590969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.590978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.602970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.602979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.614970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.614977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.626969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.626976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.693 [2024-11-20 16:44:54.638969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.693 [2024-11-20 16:44:54.638976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.650970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.650979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.662969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.662976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.674969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.674976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.686970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.686978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.698969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.698976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.710969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 
[2024-11-20 16:44:54.710976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.722970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.722977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.735022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.735032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.746973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.746989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 Running I/O for 5 seconds... 00:33:08.953 [2024-11-20 16:44:54.762349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.762366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.775400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.775417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.790340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.790356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.803394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.803408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.817844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 
16:44:54.817863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.830972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.830992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.843756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.843771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.856666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.856681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.870078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.870093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.883219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.883234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.896165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.896180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.953 [2024-11-20 16:44:54.910062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.953 [2024-11-20 16:44:54.910077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.213 [2024-11-20 16:44:54.922956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.213 [2024-11-20 16:44:54.922971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:33:09.213 [2024-11-20 16:44:54.935703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.213 [2024-11-20 16:44:54.935717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.213 [2024-11-20 16:44:54.949977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.213 [2024-11-20 16:44:54.949996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:54.963037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:54.963051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:54.975861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:54.975875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:54.990292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:54.990307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.003449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.003463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.017918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.017933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.030604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.030618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 
[2024-11-20 16:44:55.043264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.043278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.058476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.058491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.071962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.071986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.086572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.086587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.099729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.099744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.114605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.114620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.127654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.127668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.142485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.142499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.155574] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.155588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.214 [2024-11-20 16:44:55.170204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.214 [2024-11-20 16:44:55.170218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.183098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.183112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.196019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.196033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.209888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.209902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.222611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.222625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.235278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.235292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.250281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.250295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.263518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.263532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.277557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.277572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.290699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.290714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.304044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.304058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.318033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.318048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.331180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.331200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.343972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.343991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.358374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.358389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.371368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 
[2024-11-20 16:44:55.371382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.386341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.386355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.399582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.399596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.413721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.413736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.475 [2024-11-20 16:44:55.426970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.475 [2024-11-20 16:44:55.426988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.439519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.439535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.454596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.454611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.467450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.467464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.481855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.481869] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.495186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.495200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.508256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.508270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.522110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.522125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.535164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.535178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.548216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.548231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.562387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.562401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.575354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.575368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.589928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.589950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:09.736 [2024-11-20 16:44:55.602875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.602890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.616295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.616310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.630996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.631011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.643628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.643642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.657896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.657910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.670920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.670934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.736 [2024-11-20 16:44:55.684178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.736 [2024-11-20 16:44:55.684192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.996 [2024-11-20 16:44:55.698290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.996 [2024-11-20 16:44:55.698304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.996 [2024-11-20 16:44:55.711699] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.996 [2024-11-20 16:44:55.711713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.996 [2024-11-20 16:44:55.726298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.996 [2024-11-20 16:44:55.726312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.996 [2024-11-20 16:44:55.739232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.997 [2024-11-20 16:44:55.739247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.997 [2024-11-20 16:44:55.752142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.997 [2024-11-20 16:44:55.752155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.997 19007.00 IOPS, 148.49 MiB/s [2024-11-20T15:44:55.956Z] [2024-11-20 16:44:55.766691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.997 [2024-11-20 16:44:55.766705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.997 [2024-11-20 16:44:55.780009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.997 [2024-11-20 16:44:55.780023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.997 [2024-11-20 16:44:55.794292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.997 [2024-11-20 16:44:55.794306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.997 [2024-11-20 16:44:55.807558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.997 [2024-11-20 16:44:55.807571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:09.997 [2024-11-20 16:44:55.821907] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:09.997 [2024-11-20 16:44:55.821921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 13 ms intervals from [2024-11-20 16:44:55.834999] through [2024-11-20 16:44:58.143971]; only the two interleaved throughput samples are kept ...]
00:33:11.038 19050.50 IOPS, 148.83 MiB/s [2024-11-20T15:44:56.997Z]
00:33:11.818 19051.67 IOPS, 148.84 MiB/s [2024-11-20T15:44:57.777Z]
add namespace 00:33:12.339 [2024-11-20 16:44:58.158069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.158083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.171078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.171092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.183780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.183794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.198057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.198071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.211206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.211220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.224150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.224164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.238371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.238386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.251176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.251190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.264593] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.264607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.278204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.278219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.339 [2024-11-20 16:44:58.291092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.339 [2024-11-20 16:44:58.291107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.304018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.304033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.318937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.318951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.331526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.331541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.345975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.345994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.359219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.359234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.372257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.372271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.386022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.386037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.399090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.399108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.599 [2024-11-20 16:44:58.411710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.599 [2024-11-20 16:44:58.411725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.426154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.426169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.438828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.438843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.451202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.451217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.463747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.463761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.478317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 
[2024-11-20 16:44:58.478331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.490896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.490911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.504357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.504371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.518112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.518127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.531086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.531101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.600 [2024-11-20 16:44:58.543980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.600 [2024-11-20 16:44:58.543999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.558445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.558460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.571471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.571485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.586592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.586608] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.599789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.599803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.613725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.613740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.626627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.626642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.639351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.639365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.653755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.653773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.666623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.666637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.679569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.679583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.693782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.693796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:12.860 [2024-11-20 16:44:58.706577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.706592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.719310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.719324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.734162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.734176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.747516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.747530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.761992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.762006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 19063.50 IOPS, 148.93 MiB/s [2024-11-20T15:44:58.819Z] [2024-11-20 16:44:58.774675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.774690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.787202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.787216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:12.860 [2024-11-20 16:44:58.800115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.800130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:12.860 [2024-11-20 16:44:58.814335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:12.860 [2024-11-20 16:44:58.814350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.826845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.826859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.840252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.840267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.854415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.854430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.867520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.867534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.881793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.881807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.894760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.894775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.907469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.907483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.922062] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.922076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.120 [2024-11-20 16:44:58.935194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.120 [2024-11-20 16:44:58.935208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:58.947866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:58.947880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:58.962153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:58.962168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:58.975156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:58.975171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:58.988388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:58.988403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:59.001623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:59.001637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:59.014463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:59.014477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:59.027017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:59.027031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:59.039386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:59.039400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:59.054499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:59.054514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.121 [2024-11-20 16:44:59.067889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.121 [2024-11-20 16:44:59.067903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.082075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.082090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.095024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.095039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.108474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.108488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.122364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.122379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.135079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 
[2024-11-20 16:44:59.135093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.148440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.148454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.162609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.162624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.175682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.175696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.190288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.190302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.203612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.203625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.217904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.217918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.230974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.230993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.244434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.244448] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.258402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.258416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.271273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.271287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.286260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.286274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.299190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.299203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.312030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.312044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.380 [2024-11-20 16:44:59.325911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.380 [2024-11-20 16:44:59.325925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.338702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.338716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.351416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.351430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:13.640 [2024-11-20 16:44:59.366308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.366322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.379303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.379316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.394670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.394684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.407445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.407458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.422116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.422130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.435155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.435169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.448212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.448227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.461891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.461906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.475008] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.475022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.488480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.488494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.502425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.502439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.515618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.515632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.529866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.529881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.542739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.542753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.555299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.555313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.570084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.570098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.583187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.583200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.640 [2024-11-20 16:44:59.596002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.640 [2024-11-20 16:44:59.596016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.610260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.610275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.623844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.623860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.637876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.637892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.650856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.650871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.664360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.664379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.677914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.677929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.691145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 
[2024-11-20 16:44:59.691160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.703893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.703907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.718103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.718117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.730731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.730745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.743522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.743535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.757649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.757663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 19077.60 IOPS, 149.04 MiB/s [2024-11-20T15:44:59.860Z] [2024-11-20 16:44:59.769941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.769956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 00:33:13.901 Latency(us) 00:33:13.901 [2024-11-20T15:44:59.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.901 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:13.901 Nvme1n1 : 5.01 19076.02 149.03 0.00 0.00 6702.45 2430.29 12178.77 00:33:13.901 [2024-11-20T15:44:59.860Z] 
=================================================================================================================== 00:33:13.901 [2024-11-20T15:44:59.860Z] Total : 19076.02 149.03 0.00 0.00 6702.45 2430.29 12178.77 00:33:13.901 [2024-11-20 16:44:59.778973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.778988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.790975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.790991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.802976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.802990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.814974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.814987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.826972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.826985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.838970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.838979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:13.901 [2024-11-20 16:44:59.850968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:13.901 [2024-11-20 16:44:59.850976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.161 [2024-11-20 16:44:59.862970] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.161 [2024-11-20 16:44:59.862990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.161 [2024-11-20 16:44:59.874968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.161 [2024-11-20 16:44:59.874976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.161 [2024-11-20 16:44:59.886968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:14.161 [2024-11-20 16:44:59.886975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:14.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2465013) - No such process 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2465013 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:14.161 delay0 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.161 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:14.161 [2024-11-20 16:45:00.079183] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:20.879 Initializing NVMe Controllers 00:33:20.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:20.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:20.879 Initialization complete. Launching workers. 
00:33:20.879 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 276, failed: 19716 00:33:20.879 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 19900, failed to submit 92 00:33:20.879 success 19815, unsuccessful 85, failed 0 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.879 rmmod nvme_tcp 00:33:20.879 rmmod nvme_fabrics 00:33:20.879 rmmod nvme_keyring 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2462886 ']' 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2462886 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 2462886 ']' 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2462886 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2462886 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2462886' 00:33:20.879 killing process with pid 2462886 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2462886 00:33:20.879 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2462886 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.139 16:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.051 16:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:23.051 00:33:23.051 real 0m33.582s 00:33:23.051 user 0m43.306s 00:33:23.051 sys 0m11.875s 00:33:23.051 16:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.051 16:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:23.051 ************************************ 00:33:23.051 END TEST nvmf_zcopy 00:33:23.051 ************************************ 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:23.311 
************************************ 00:33:23.311 START TEST nvmf_nmic 00:33:23.311 ************************************ 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:23.311 * Looking for test storage... 00:33:23.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.311 16:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:23.311 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.571 16:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:23.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.571 --rc genhtml_branch_coverage=1 00:33:23.571 --rc genhtml_function_coverage=1 00:33:23.571 --rc genhtml_legend=1 00:33:23.571 --rc geninfo_all_blocks=1 00:33:23.571 --rc geninfo_unexecuted_blocks=1 00:33:23.571 00:33:23.571 ' 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:23.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.571 --rc genhtml_branch_coverage=1 00:33:23.571 --rc genhtml_function_coverage=1 00:33:23.571 --rc genhtml_legend=1 00:33:23.571 --rc geninfo_all_blocks=1 00:33:23.571 --rc geninfo_unexecuted_blocks=1 00:33:23.571 00:33:23.571 ' 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:23.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.571 --rc genhtml_branch_coverage=1 00:33:23.571 --rc genhtml_function_coverage=1 00:33:23.571 --rc genhtml_legend=1 00:33:23.571 --rc geninfo_all_blocks=1 00:33:23.571 --rc geninfo_unexecuted_blocks=1 00:33:23.571 00:33:23.571 ' 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:23.571 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.571 --rc genhtml_branch_coverage=1 00:33:23.571 --rc genhtml_function_coverage=1 00:33:23.571 --rc genhtml_legend=1 00:33:23.571 --rc geninfo_all_blocks=1 00:33:23.571 --rc geninfo_unexecuted_blocks=1 00:33:23.571 00:33:23.571 ' 00:33:23.571 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:23.572 16:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.572 16:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.572 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.708 16:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.708 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.708 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.708 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.708 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.708 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.708 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.708 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.709 16:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:31.709 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:31.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.709 16:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:31.709 Found net devices under 0000:31:00.0: cvl_0_0 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.709 16:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:31.709 Found net devices under 0000:31:00.1: cvl_0_1 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.709 16:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:33:31.709 00:33:31.709 --- 10.0.0.2 ping statistics --- 00:33:31.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.709 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:33:31.709 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:31.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:33:31.710 00:33:31.710 --- 10.0.0.1 ping statistics --- 00:33:31.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.710 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2471971 
00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2471971 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2471971 ']' 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.710 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 [2024-11-20 16:45:16.625818] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:31.710 [2024-11-20 16:45:16.626937] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:33:31.710 [2024-11-20 16:45:16.626996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.710 [2024-11-20 16:45:16.712969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:31.710 [2024-11-20 16:45:16.756758] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.710 [2024-11-20 16:45:16.756797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:31.710 [2024-11-20 16:45:16.756805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.710 [2024-11-20 16:45:16.756812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.710 [2024-11-20 16:45:16.756819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.710 [2024-11-20 16:45:16.758297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.710 [2024-11-20 16:45:16.758427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:31.710 [2024-11-20 16:45:16.758585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.710 [2024-11-20 16:45:16.758585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:31.710 [2024-11-20 16:45:16.816258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:31.710 [2024-11-20 16:45:16.816397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:31.710 [2024-11-20 16:45:16.817488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:31.710 [2024-11-20 16:45:16.818082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:31.710 [2024-11-20 16:45:16.818168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 [2024-11-20 16:45:17.487042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 Malloc0 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 [2024-11-20 16:45:17.571221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.710 16:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:31.710 test case1: single bdev can't be used in multiple subsystems 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.710 [2024-11-20 16:45:17.606964] 
bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:31.710 [2024-11-20 16:45:17.606988] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:31.710 [2024-11-20 16:45:17.606996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.710 request: 00:33:31.710 { 00:33:31.710 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:31.710 "namespace": { 00:33:31.710 "bdev_name": "Malloc0", 00:33:31.710 "no_auto_visible": false 00:33:31.710 }, 00:33:31.710 "method": "nvmf_subsystem_add_ns", 00:33:31.710 "req_id": 1 00:33:31.710 } 00:33:31.710 Got JSON-RPC error response 00:33:31.710 response: 00:33:31.710 { 00:33:31.710 "code": -32602, 00:33:31.710 "message": "Invalid parameters" 00:33:31.710 } 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:31.710 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:31.710 Adding namespace failed - expected result. 
00:33:31.711 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:31.711 test case2: host connect to nvmf target in multiple paths 00:33:31.711 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:31.711 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.711 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:31.711 [2024-11-20 16:45:17.619077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:31.711 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.711 16:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:32.280 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:32.541 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:32.541 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:32.541 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:32.541 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:32.541 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:35.083 16:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:35.083 16:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:35.083 16:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:35.083 16:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:35.083 16:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:35.083 16:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:35.083 16:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:35.083 [global] 00:33:35.083 thread=1 00:33:35.083 invalidate=1 00:33:35.083 rw=write 00:33:35.083 time_based=1 00:33:35.083 runtime=1 00:33:35.083 ioengine=libaio 00:33:35.083 direct=1 00:33:35.083 bs=4096 00:33:35.083 iodepth=1 00:33:35.083 norandommap=0 00:33:35.083 numjobs=1 00:33:35.083 00:33:35.083 verify_dump=1 00:33:35.083 verify_backlog=512 00:33:35.083 verify_state_save=0 00:33:35.083 do_verify=1 00:33:35.083 verify=crc32c-intel 00:33:35.083 [job0] 00:33:35.083 filename=/dev/nvme0n1 00:33:35.083 Could not set queue depth (nvme0n1) 00:33:35.083 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:35.083 fio-3.35 00:33:35.083 Starting 1 thread 00:33:36.022 00:33:36.022 job0: (groupid=0, jobs=1): err= 0: pid=2473113: Wed Nov 20 
16:45:21 2024 00:33:36.022 read: IOPS=18, BW=75.3KiB/s (77.1kB/s)(76.0KiB/1009msec) 00:33:36.022 slat (nsec): min=26043, max=26611, avg=26340.32, stdev=149.43 00:33:36.022 clat (usec): min=40918, max=41088, avg=40970.48, stdev=43.65 00:33:36.022 lat (usec): min=40944, max=41114, avg=40996.82, stdev=43.56 00:33:36.022 clat percentiles (usec): 00:33:36.022 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:36.022 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:36.022 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:36.022 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:36.022 | 99.99th=[41157] 00:33:36.022 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:33:36.022 slat (nsec): min=9664, max=76232, avg=27010.42, stdev=11435.24 00:33:36.022 clat (usec): min=145, max=714, avg=414.93, stdev=74.46 00:33:36.022 lat (usec): min=158, max=746, avg=441.95, stdev=77.19 00:33:36.022 clat percentiles (usec): 00:33:36.022 | 1.00th=[ 253], 5.00th=[ 293], 10.00th=[ 322], 20.00th=[ 347], 00:33:36.022 | 30.00th=[ 371], 40.00th=[ 408], 50.00th=[ 429], 60.00th=[ 441], 00:33:36.022 | 70.00th=[ 457], 80.00th=[ 469], 90.00th=[ 482], 95.00th=[ 529], 00:33:36.022 | 99.00th=[ 635], 99.50th=[ 668], 99.90th=[ 717], 99.95th=[ 717], 00:33:36.022 | 99.99th=[ 717] 00:33:36.022 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:36.022 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:36.022 lat (usec) : 250=0.75%, 500=88.14%, 750=7.53% 00:33:36.022 lat (msec) : 50=3.58% 00:33:36.022 cpu : usr=0.99%, sys=0.99%, ctx=531, majf=0, minf=1 00:33:36.022 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:36.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:36.023 issued 
rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:36.023 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:36.023 00:33:36.023 Run status group 0 (all jobs): 00:33:36.023 READ: bw=75.3KiB/s (77.1kB/s), 75.3KiB/s-75.3KiB/s (77.1kB/s-77.1kB/s), io=76.0KiB (77.8kB), run=1009-1009msec 00:33:36.023 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:33:36.023 00:33:36.023 Disk stats (read/write): 00:33:36.023 nvme0n1: ios=66/512, merge=0/0, ticks=1049/208, in_queue=1257, util=98.10% 00:33:36.023 16:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:36.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:36.283 16:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:36.283 rmmod nvme_tcp 00:33:36.283 rmmod nvme_fabrics 00:33:36.283 rmmod nvme_keyring 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2471971 ']' 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2471971 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2471971 ']' 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2471971 00:33:36.283 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471971 
00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471971' 00:33:36.543 killing process with pid 2471971 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2471971 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2471971 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.543 16:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:36.543 16:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:39.087 00:33:39.087 real 0m15.452s 00:33:39.087 user 0m34.310s 00:33:39.087 sys 0m7.287s 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:39.087 ************************************ 00:33:39.087 END TEST nvmf_nmic 00:33:39.087 ************************************ 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:39.087 ************************************ 00:33:39.087 START TEST nvmf_fio_target 00:33:39.087 ************************************ 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:39.087 * Looking for test storage... 
00:33:39.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.087 
16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:39.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.087 --rc genhtml_branch_coverage=1 00:33:39.087 --rc genhtml_function_coverage=1 00:33:39.087 --rc genhtml_legend=1 00:33:39.087 --rc geninfo_all_blocks=1 00:33:39.087 --rc geninfo_unexecuted_blocks=1 00:33:39.087 00:33:39.087 ' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:39.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.087 --rc genhtml_branch_coverage=1 00:33:39.087 --rc genhtml_function_coverage=1 00:33:39.087 --rc genhtml_legend=1 00:33:39.087 --rc geninfo_all_blocks=1 00:33:39.087 --rc geninfo_unexecuted_blocks=1 00:33:39.087 00:33:39.087 ' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:39.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.087 --rc genhtml_branch_coverage=1 00:33:39.087 --rc genhtml_function_coverage=1 00:33:39.087 --rc genhtml_legend=1 00:33:39.087 --rc geninfo_all_blocks=1 00:33:39.087 --rc geninfo_unexecuted_blocks=1 00:33:39.087 00:33:39.087 ' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:39.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.087 --rc genhtml_branch_coverage=1 00:33:39.087 --rc genhtml_function_coverage=1 00:33:39.087 --rc genhtml_legend=1 00:33:39.087 --rc geninfo_all_blocks=1 
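Editor's note: the `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-`, and `:` into an array and compares field by field. A simplified re-creation of that logic (not the exact upstream `scripts/common.sh` implementation, which also handles the `>`, `=`, and `!=` operators):

```shell
# ver_lt A B: succeeds if version A sorts strictly before version B.
# Fields are compared numerically; missing fields count as 0 (so 1.15 vs 2
# compares 1<2 and returns true, as in the log).
ver_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"
```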
00:33:39.087 --rc geninfo_unexecuted_blocks=1 00:33:39.087 00:33:39.087 ' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:39.087 
16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.087 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.088 16:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.088 
16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:39.088 16:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:39.088 16:45:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:47.223 16:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:47.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:47.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:47.223 
16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.223 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:47.224 Found net 
devices under 0000:31:00.0: cvl_0_0 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:47.224 Found net devices under 0000:31:00.1: cvl_0_1 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:47.224 16:45:31 
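Editor's note: the "Found net devices under ..." lines above come from resolving each discovered PCI function to its kernel network interface via sysfs: the glob `/sys/bus/pci/devices/<bdf>/net/*` lists the interface directories, and `${pci_net_devs[@]##*/}` strips the path down to the interface name. A self-contained sketch using a fake sysfs tree (paths and interface names are fabricated so this runs anywhere, without real hardware):

```shell
# Fake sysfs layout standing in for /sys/bus/pci/devices.
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/0000:31:00.0/net/cvl_0_0" "$fake_sys/0000:31:00.1/net/cvl_0_1"

for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("$fake_sys/$pci/net/"*)        # one entry per interface dir
    pci_net_devs=("${pci_net_devs[@]##*/}")      # strip path, keep iface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
rm -rf "$fake_sys"
```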
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:47.224 16:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:47.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:47.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:33:47.224 00:33:47.224 --- 10.0.0.2 ping statistics --- 00:33:47.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.224 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:47.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:47.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:33:47.224 00:33:47.224 --- 10.0.0.1 ping statistics --- 00:33:47.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.224 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.224 16:45:32 
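Editor's note: the `nvmf_tcp_init` sequence above builds the two-endpoint topology these tests run on: the target port `cvl_0_0` is moved into the network namespace `cvl_0_0_ns_spdk` with 10.0.0.2, the initiator port `cvl_0_1` stays in the root namespace with 10.0.0.1, and a ping in each direction verifies connectivity before `nvmf_tgt` is started inside the namespace. A dry-run sketch of the sequence (the real commands need root and the physical NIC pair, so this variant only prints each step):

```shell
# Print-only stand-in; replace the echo with "$@" (as root) to apply for real.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator, root namespace
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run ping -c 1 10.0.0.2                                # initiator -> target
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

Once both pings succeed, `NVMF_APP` is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is why the subsequent `nvmf_tgt` launch in the log runs inside that namespace.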
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2477512 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2477512 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2477512 ']' 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.224 [2024-11-20 16:45:32.137102] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:47.224 [2024-11-20 16:45:32.138269] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:33:47.224 [2024-11-20 16:45:32.138319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.224 [2024-11-20 16:45:32.221746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:47.224 [2024-11-20 16:45:32.263864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.224 [2024-11-20 16:45:32.263901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.224 [2024-11-20 16:45:32.263909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.224 [2024-11-20 16:45:32.263916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.224 [2024-11-20 16:45:32.263922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.224 [2024-11-20 16:45:32.265497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.224 [2024-11-20 16:45:32.265628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.224 [2024-11-20 16:45:32.265753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.224 [2024-11-20 16:45:32.265753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.224 [2024-11-20 16:45:32.322972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:47.224 [2024-11-20 16:45:32.323113] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:47.224 [2024-11-20 16:45:32.324124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:47.224 [2024-11-20 16:45:32.324614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:47.224 [2024-11-20 16:45:32.324691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:47.224 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:47.225 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.225 16:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:47.225 [2024-11-20 16:45:33.142299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.485 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:47.485 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:47.485 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:33:47.745 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:47.745 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.005 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:48.005 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.265 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:48.265 16:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:48.265 16:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.525 16:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:48.525 16:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.785 16:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:48.785 16:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:48.785 16:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:33:48.785 16:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:49.045 16:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:49.305 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:49.305 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:49.305 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:49.305 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:49.565 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:49.824 [2024-11-20 16:45:35.530381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.824 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:49.824 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:50.084 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:50.653 16:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:50.653 16:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:50.653 16:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:50.653 16:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:50.653 16:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:50.653 16:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:52.561 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:52.561 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:52.561 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:52.561 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:52.561 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:52.561 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:33:52.561 16:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:52.561 [global] 00:33:52.561 thread=1 00:33:52.561 invalidate=1 00:33:52.561 rw=write 00:33:52.561 time_based=1 00:33:52.561 runtime=1 00:33:52.561 ioengine=libaio 00:33:52.561 direct=1 00:33:52.561 bs=4096 00:33:52.561 iodepth=1 00:33:52.561 norandommap=0 00:33:52.561 numjobs=1 00:33:52.561 00:33:52.561 verify_dump=1 00:33:52.561 verify_backlog=512 00:33:52.561 verify_state_save=0 00:33:52.561 do_verify=1 00:33:52.561 verify=crc32c-intel 00:33:52.561 [job0] 00:33:52.561 filename=/dev/nvme0n1 00:33:52.561 [job1] 00:33:52.561 filename=/dev/nvme0n2 00:33:52.561 [job2] 00:33:52.561 filename=/dev/nvme0n3 00:33:52.561 [job3] 00:33:52.561 filename=/dev/nvme0n4 00:33:52.561 Could not set queue depth (nvme0n1) 00:33:52.561 Could not set queue depth (nvme0n2) 00:33:52.561 Could not set queue depth (nvme0n3) 00:33:52.561 Could not set queue depth (nvme0n4) 00:33:53.136 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.136 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.136 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.136 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.136 fio-3.35 00:33:53.136 Starting 4 threads 00:33:54.520 00:33:54.520 job0: (groupid=0, jobs=1): err= 0: pid=2478939: Wed Nov 20 16:45:40 2024 00:33:54.520 read: IOPS=288, BW=1154KiB/s (1182kB/s)(1160KiB/1005msec) 00:33:54.520 slat (nsec): min=6399, max=45354, avg=25682.65, stdev=6457.94 00:33:54.520 clat (usec): min=481, max=42054, avg=2343.64, stdev=7838.00 00:33:54.520 lat (usec): min=509, 
max=42081, avg=2369.32, stdev=7838.36 00:33:54.520 clat percentiles (usec): 00:33:54.520 | 1.00th=[ 519], 5.00th=[ 586], 10.00th=[ 611], 20.00th=[ 660], 00:33:54.520 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 766], 60.00th=[ 807], 00:33:54.520 | 70.00th=[ 873], 80.00th=[ 979], 90.00th=[ 1037], 95.00th=[ 1139], 00:33:54.520 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:54.520 | 99.99th=[42206] 00:33:54.520 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:33:54.520 slat (nsec): min=9262, max=60131, avg=32038.72, stdev=9999.11 00:33:54.520 clat (usec): min=222, max=1975, avg=575.56, stdev=147.27 00:33:54.520 lat (usec): min=257, max=2011, avg=607.60, stdev=148.84 00:33:54.520 clat percentiles (usec): 00:33:54.520 | 1.00th=[ 269], 5.00th=[ 355], 10.00th=[ 388], 20.00th=[ 449], 00:33:54.520 | 30.00th=[ 515], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 611], 00:33:54.520 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 791], 00:33:54.520 | 99.00th=[ 906], 99.50th=[ 971], 99.90th=[ 1975], 99.95th=[ 1975], 00:33:54.520 | 99.99th=[ 1975] 00:33:54.520 bw ( KiB/s): min= 4096, max= 4096, per=39.76%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.520 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.520 lat (usec) : 250=0.50%, 500=17.96%, 750=57.11%, 1000=18.58% 00:33:54.520 lat (msec) : 2=4.49%, 50=1.37% 00:33:54.520 cpu : usr=1.99%, sys=2.79%, ctx=805, majf=0, minf=1 00:33:54.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.520 issued rwts: total=290,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.520 job1: (groupid=0, jobs=1): err= 0: pid=2478948: Wed Nov 20 16:45:40 2024 00:33:54.520 read: IOPS=511, 
BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:54.521 slat (nsec): min=5937, max=48077, avg=27667.67, stdev=3545.27 00:33:54.521 clat (usec): min=634, max=42032, avg=1073.64, stdev=1815.37 00:33:54.521 lat (usec): min=641, max=42060, avg=1101.31, stdev=1815.39 00:33:54.521 clat percentiles (usec): 00:33:54.521 | 1.00th=[ 758], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 938], 00:33:54.521 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:33:54.521 | 70.00th=[ 1037], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:33:54.521 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[42206], 99.95th=[42206], 00:33:54.521 | 99.99th=[42206] 00:33:54.521 write: IOPS=705, BW=2821KiB/s (2889kB/s)(2824KiB/1001msec); 0 zone resets 00:33:54.521 slat (nsec): min=8951, max=63213, avg=32242.20, stdev=9593.23 00:33:54.521 clat (usec): min=243, max=998, avg=571.54, stdev=116.62 00:33:54.521 lat (usec): min=279, max=1033, avg=603.78, stdev=119.17 00:33:54.521 clat percentiles (usec): 00:33:54.521 | 1.00th=[ 302], 5.00th=[ 375], 10.00th=[ 420], 20.00th=[ 474], 00:33:54.521 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:33:54.521 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 750], 00:33:54.521 | 99.00th=[ 848], 99.50th=[ 922], 99.90th=[ 996], 99.95th=[ 996], 00:33:54.521 | 99.99th=[ 996] 00:33:54.521 bw ( KiB/s): min= 4096, max= 4096, per=39.76%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.521 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.521 lat (usec) : 250=0.08%, 500=15.27%, 750=39.82%, 1000=23.65% 00:33:54.521 lat (msec) : 2=21.10%, 50=0.08% 00:33:54.521 cpu : usr=2.00%, sys=5.40%, ctx=1220, majf=0, minf=1 00:33:54.521 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.521 issued rwts: total=512,706,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:54.521 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.521 job2: (groupid=0, jobs=1): err= 0: pid=2478958: Wed Nov 20 16:45:40 2024 00:33:54.521 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:54.521 slat (nsec): min=7006, max=45040, avg=25412.03, stdev=7054.96 00:33:54.521 clat (usec): min=467, max=1102, avg=757.14, stdev=92.22 00:33:54.521 lat (usec): min=490, max=1129, avg=782.55, stdev=93.69 00:33:54.521 clat percentiles (usec): 00:33:54.521 | 1.00th=[ 529], 5.00th=[ 586], 10.00th=[ 644], 20.00th=[ 685], 00:33:54.521 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 758], 60.00th=[ 783], 00:33:54.521 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:33:54.521 | 99.00th=[ 971], 99.50th=[ 988], 99.90th=[ 1106], 99.95th=[ 1106], 00:33:54.521 | 99.99th=[ 1106] 00:33:54.521 write: IOPS=924, BW=3696KiB/s (3785kB/s)(3700KiB/1001msec); 0 zone resets 00:33:54.521 slat (nsec): min=9679, max=69824, avg=31886.24, stdev=10377.22 00:33:54.521 clat (usec): min=182, max=1103, avg=605.10, stdev=165.02 00:33:54.521 lat (usec): min=193, max=1139, avg=636.98, stdev=169.35 00:33:54.521 clat percentiles (usec): 00:33:54.521 | 1.00th=[ 265], 5.00th=[ 314], 10.00th=[ 388], 20.00th=[ 461], 00:33:54.521 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 644], 00:33:54.521 | 70.00th=[ 701], 80.00th=[ 758], 90.00th=[ 824], 95.00th=[ 865], 00:33:54.521 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 1106], 99.95th=[ 1106], 00:33:54.521 | 99.99th=[ 1106] 00:33:54.521 bw ( KiB/s): min= 4096, max= 4096, per=39.76%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.521 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.521 lat (usec) : 250=0.56%, 500=16.08%, 750=50.52%, 1000=32.57% 00:33:54.521 lat (msec) : 2=0.28% 00:33:54.521 cpu : usr=2.30%, sys=4.00%, ctx=1438, majf=0, minf=1 00:33:54.521 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.521 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.521 issued rwts: total=512,925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.521 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.521 job3: (groupid=0, jobs=1): err= 0: pid=2478964: Wed Nov 20 16:45:40 2024 00:33:54.521 read: IOPS=19, BW=77.6KiB/s (79.5kB/s)(80.0KiB/1031msec) 00:33:54.521 slat (nsec): min=25041, max=26228, avg=25466.10, stdev=260.08 00:33:54.521 clat (usec): min=571, max=42107, avg=39867.87, stdev=9250.40 00:33:54.521 lat (usec): min=596, max=42132, avg=39893.34, stdev=9250.38 00:33:54.521 clat percentiles (usec): 00:33:54.521 | 1.00th=[ 570], 5.00th=[ 570], 10.00th=[41681], 20.00th=[41681], 00:33:54.521 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:54.521 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:54.521 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:54.521 | 99.99th=[42206] 00:33:54.521 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:33:54.521 slat (nsec): min=9990, max=57582, avg=32489.75, stdev=5661.93 00:33:54.521 clat (usec): min=155, max=893, avg=414.17, stdev=124.28 00:33:54.521 lat (usec): min=188, max=906, avg=446.66, stdev=124.10 00:33:54.521 clat percentiles (usec): 00:33:54.521 | 1.00th=[ 253], 5.00th=[ 281], 10.00th=[ 297], 20.00th=[ 310], 00:33:54.521 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 359], 60.00th=[ 388], 00:33:54.521 | 70.00th=[ 498], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 627], 00:33:54.521 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 898], 99.95th=[ 898], 00:33:54.521 | 99.99th=[ 898] 00:33:54.521 bw ( KiB/s): min= 4096, max= 4096, per=39.76%, avg=4096.00, stdev= 0.00, samples=1 00:33:54.521 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:54.521 lat (usec) : 250=0.75%, 500=66.73%, 750=28.76%, 1000=0.19% 
00:33:54.521 lat (msec) : 50=3.57% 00:33:54.521 cpu : usr=1.07%, sys=1.36%, ctx=532, majf=0, minf=2 00:33:54.521 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.521 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.521 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.521 00:33:54.521 Run status group 0 (all jobs): 00:33:54.521 READ: bw=5176KiB/s (5300kB/s), 77.6KiB/s-2046KiB/s (79.5kB/s-2095kB/s), io=5336KiB (5464kB), run=1001-1031msec 00:33:54.521 WRITE: bw=10.1MiB/s (10.5MB/s), 1986KiB/s-3696KiB/s (2034kB/s-3785kB/s), io=10.4MiB (10.9MB), run=1001-1031msec 00:33:54.521 00:33:54.521 Disk stats (read/write): 00:33:54.521 nvme0n1: ios=338/512, merge=0/0, ticks=1253/239, in_queue=1492, util=96.99% 00:33:54.521 nvme0n2: ios=517/512, merge=0/0, ticks=1102/238, in_queue=1340, util=97.24% 00:33:54.521 nvme0n3: ios=569/716, merge=0/0, ticks=662/374, in_queue=1036, util=97.25% 00:33:54.521 nvme0n4: ios=42/512, merge=0/0, ticks=1048/194, in_queue=1242, util=96.16% 00:33:54.521 16:45:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:54.521 [global] 00:33:54.521 thread=1 00:33:54.521 invalidate=1 00:33:54.521 rw=randwrite 00:33:54.521 time_based=1 00:33:54.521 runtime=1 00:33:54.521 ioengine=libaio 00:33:54.521 direct=1 00:33:54.521 bs=4096 00:33:54.521 iodepth=1 00:33:54.521 norandommap=0 00:33:54.521 numjobs=1 00:33:54.521 00:33:54.521 verify_dump=1 00:33:54.521 verify_backlog=512 00:33:54.521 verify_state_save=0 00:33:54.521 do_verify=1 00:33:54.521 verify=crc32c-intel 00:33:54.521 [job0] 00:33:54.521 filename=/dev/nvme0n1 00:33:54.521 [job1] 00:33:54.521 filename=/dev/nvme0n2 
00:33:54.521 [job2] 00:33:54.521 filename=/dev/nvme0n3 00:33:54.521 [job3] 00:33:54.521 filename=/dev/nvme0n4 00:33:54.521 Could not set queue depth (nvme0n1) 00:33:54.521 Could not set queue depth (nvme0n2) 00:33:54.521 Could not set queue depth (nvme0n3) 00:33:54.521 Could not set queue depth (nvme0n4) 00:33:54.781 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.781 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.781 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.781 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:54.781 fio-3.35 00:33:54.781 Starting 4 threads 00:33:56.163 00:33:56.163 job0: (groupid=0, jobs=1): err= 0: pid=2479427: Wed Nov 20 16:45:41 2024 00:33:56.163 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:56.163 slat (nsec): min=7069, max=58212, avg=25242.10, stdev=3649.53 00:33:56.163 clat (usec): min=533, max=1398, avg=1079.17, stdev=112.09 00:33:56.163 lat (usec): min=558, max=1423, avg=1104.41, stdev=112.45 00:33:56.163 clat percentiles (usec): 00:33:56.163 | 1.00th=[ 766], 5.00th=[ 881], 10.00th=[ 947], 20.00th=[ 996], 00:33:56.163 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:33:56.163 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:33:56.163 | 99.00th=[ 1319], 99.50th=[ 1369], 99.90th=[ 1401], 99.95th=[ 1401], 00:33:56.163 | 99.99th=[ 1401] 00:33:56.163 write: IOPS=660, BW=2641KiB/s (2705kB/s)(2644KiB/1001msec); 0 zone resets 00:33:56.163 slat (nsec): min=9290, max=65173, avg=29328.11, stdev=7804.25 00:33:56.163 clat (usec): min=276, max=967, avg=613.44, stdev=121.94 00:33:56.163 lat (usec): min=286, max=997, avg=642.77, stdev=123.84 00:33:56.163 clat percentiles (usec): 00:33:56.163 | 1.00th=[ 338], 
5.00th=[ 416], 10.00th=[ 461], 20.00th=[ 506], 00:33:56.163 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:33:56.164 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 816], 00:33:56.164 | 99.00th=[ 906], 99.50th=[ 947], 99.90th=[ 971], 99.95th=[ 971], 00:33:56.164 | 99.99th=[ 971] 00:33:56.164 bw ( KiB/s): min= 4087, max= 4087, per=32.51%, avg=4087.00, stdev= 0.00, samples=1 00:33:56.164 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:56.164 lat (usec) : 500=10.66%, 750=39.13%, 1000=16.20% 00:33:56.164 lat (msec) : 2=34.02% 00:33:56.164 cpu : usr=1.60%, sys=3.50%, ctx=1174, majf=0, minf=1 00:33:56.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.164 issued rwts: total=512,661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.164 job1: (groupid=0, jobs=1): err= 0: pid=2479439: Wed Nov 20 16:45:41 2024 00:33:56.164 read: IOPS=601, BW=2406KiB/s (2463kB/s)(2408KiB/1001msec) 00:33:56.164 slat (nsec): min=6497, max=55674, avg=20522.57, stdev=10951.26 00:33:56.164 clat (usec): min=479, max=1342, avg=910.05, stdev=209.32 00:33:56.164 lat (usec): min=486, max=1377, avg=930.57, stdev=218.29 00:33:56.164 clat percentiles (usec): 00:33:56.164 | 1.00th=[ 529], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 685], 00:33:56.164 | 30.00th=[ 717], 40.00th=[ 816], 50.00th=[ 955], 60.00th=[ 1020], 00:33:56.164 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1172], 95.00th=[ 1205], 00:33:56.164 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1336], 99.95th=[ 1336], 00:33:56.164 | 99.99th=[ 1336] 00:33:56.164 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:56.164 slat (nsec): min=8629, max=54970, avg=11351.95, stdev=5895.03 00:33:56.164 clat 
(usec): min=120, max=1002, avg=408.60, stdev=140.36 00:33:56.164 lat (usec): min=130, max=1036, avg=419.95, stdev=142.97 00:33:56.164 clat percentiles (usec): 00:33:56.164 | 1.00th=[ 194], 5.00th=[ 212], 10.00th=[ 231], 20.00th=[ 277], 00:33:56.164 | 30.00th=[ 334], 40.00th=[ 367], 50.00th=[ 396], 60.00th=[ 424], 00:33:56.164 | 70.00th=[ 474], 80.00th=[ 519], 90.00th=[ 586], 95.00th=[ 676], 00:33:56.164 | 99.00th=[ 799], 99.50th=[ 857], 99.90th=[ 906], 99.95th=[ 1004], 00:33:56.164 | 99.99th=[ 1004] 00:33:56.164 bw ( KiB/s): min= 4087, max= 4087, per=32.51%, avg=4087.00, stdev= 0.00, samples=1 00:33:56.164 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:56.164 lat (usec) : 250=9.96%, 500=38.13%, 750=26.57%, 1000=9.29% 00:33:56.164 lat (msec) : 2=16.05% 00:33:56.164 cpu : usr=1.80%, sys=3.10%, ctx=1629, majf=0, minf=1 00:33:56.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.164 issued rwts: total=602,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.164 job2: (groupid=0, jobs=1): err= 0: pid=2479454: Wed Nov 20 16:45:41 2024 00:33:56.164 read: IOPS=18, BW=74.1KiB/s (75.9kB/s)(76.0KiB/1025msec) 00:33:56.164 slat (nsec): min=25119, max=25630, avg=25303.95, stdev=132.85 00:33:56.164 clat (usec): min=1104, max=41135, avg=38881.69, stdev=9148.33 00:33:56.164 lat (usec): min=1130, max=41160, avg=38907.00, stdev=9148.28 00:33:56.164 clat percentiles (usec): 00:33:56.164 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[41157], 20.00th=[41157], 00:33:56.164 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:56.164 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:56.164 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:33:56.164 | 99.99th=[41157] 00:33:56.164 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:33:56.164 slat (nsec): min=9412, max=65831, avg=29536.35, stdev=7449.69 00:33:56.164 clat (usec): min=127, max=845, avg=519.84, stdev=119.84 00:33:56.164 lat (usec): min=158, max=880, avg=549.37, stdev=122.04 00:33:56.164 clat percentiles (usec): 00:33:56.164 | 1.00th=[ 212], 5.00th=[ 302], 10.00th=[ 359], 20.00th=[ 416], 00:33:56.164 | 30.00th=[ 478], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 553], 00:33:56.164 | 70.00th=[ 578], 80.00th=[ 627], 90.00th=[ 676], 95.00th=[ 701], 00:33:56.164 | 99.00th=[ 750], 99.50th=[ 783], 99.90th=[ 848], 99.95th=[ 848], 00:33:56.164 | 99.99th=[ 848] 00:33:56.164 bw ( KiB/s): min= 4087, max= 4087, per=32.51%, avg=4087.00, stdev= 0.00, samples=1 00:33:56.164 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:56.164 lat (usec) : 250=1.69%, 500=33.71%, 750=60.26%, 1000=0.75% 00:33:56.164 lat (msec) : 2=0.19%, 50=3.39% 00:33:56.164 cpu : usr=0.98%, sys=1.27%, ctx=532, majf=0, minf=1 00:33:56.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.164 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.164 job3: (groupid=0, jobs=1): err= 0: pid=2479460: Wed Nov 20 16:45:41 2024 00:33:56.164 read: IOPS=548, BW=2194KiB/s (2246kB/s)(2196KiB/1001msec) 00:33:56.164 slat (nsec): min=6448, max=54563, avg=25195.21, stdev=7424.48 00:33:56.164 clat (usec): min=274, max=990, avg=731.83, stdev=103.34 00:33:56.164 lat (usec): min=302, max=1017, avg=757.03, stdev=104.78 00:33:56.164 clat percentiles (usec): 00:33:56.164 | 1.00th=[ 478], 5.00th=[ 545], 10.00th=[ 603], 20.00th=[ 644], 00:33:56.164 | 30.00th=[ 
685], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 775], 00:33:56.164 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 881], 00:33:56.164 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 988], 99.95th=[ 988], 00:33:56.164 | 99.99th=[ 988] 00:33:56.164 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:56.164 slat (nsec): min=8702, max=62566, avg=29195.63, stdev=9639.46 00:33:56.164 clat (usec): min=124, max=1576, avg=530.57, stdev=123.74 00:33:56.164 lat (usec): min=133, max=1587, avg=559.77, stdev=128.41 00:33:56.164 clat percentiles (usec): 00:33:56.164 | 1.00th=[ 255], 5.00th=[ 306], 10.00th=[ 359], 20.00th=[ 437], 00:33:56.164 | 30.00th=[ 474], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 570], 00:33:56.164 | 70.00th=[ 603], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 709], 00:33:56.164 | 99.00th=[ 766], 99.50th=[ 799], 99.90th=[ 816], 99.95th=[ 1582], 00:33:56.164 | 99.99th=[ 1582] 00:33:56.164 bw ( KiB/s): min= 4087, max= 4087, per=32.51%, avg=4087.00, stdev= 0.00, samples=1 00:33:56.164 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:56.164 lat (usec) : 250=0.64%, 500=24.48%, 750=57.79%, 1000=17.04% 00:33:56.164 lat (msec) : 2=0.06% 00:33:56.164 cpu : usr=3.10%, sys=5.90%, ctx=1573, majf=0, minf=1 00:33:56.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:56.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:56.164 issued rwts: total=549,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:56.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:56.164 00:33:56.164 Run status group 0 (all jobs): 00:33:56.164 READ: bw=6564KiB/s (6721kB/s), 74.1KiB/s-2406KiB/s (75.9kB/s-2463kB/s), io=6728KiB (6889kB), run=1001-1025msec 00:33:56.164 WRITE: bw=12.3MiB/s (12.9MB/s), 1998KiB/s-4092KiB/s (2046kB/s-4190kB/s), io=12.6MiB (13.2MB), run=1001-1025msec 00:33:56.164 
00:33:56.164 Disk stats (read/write): 00:33:56.164 nvme0n1: ios=509/512, merge=0/0, ticks=582/298, in_queue=880, util=92.69% 00:33:56.164 nvme0n2: ios=536/899, merge=0/0, ticks=1118/357, in_queue=1475, util=96.74% 00:33:56.164 nvme0n3: ios=14/512, merge=0/0, ticks=534/253, in_queue=787, util=88.55% 00:33:56.164 nvme0n4: ios=548/810, merge=0/0, ticks=435/324, in_queue=759, util=92.99% 00:33:56.164 16:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:56.164 [global] 00:33:56.164 thread=1 00:33:56.164 invalidate=1 00:33:56.164 rw=write 00:33:56.164 time_based=1 00:33:56.164 runtime=1 00:33:56.164 ioengine=libaio 00:33:56.164 direct=1 00:33:56.164 bs=4096 00:33:56.164 iodepth=128 00:33:56.164 norandommap=0 00:33:56.164 numjobs=1 00:33:56.164 00:33:56.164 verify_dump=1 00:33:56.164 verify_backlog=512 00:33:56.164 verify_state_save=0 00:33:56.164 do_verify=1 00:33:56.164 verify=crc32c-intel 00:33:56.164 [job0] 00:33:56.164 filename=/dev/nvme0n1 00:33:56.164 [job1] 00:33:56.164 filename=/dev/nvme0n2 00:33:56.164 [job2] 00:33:56.164 filename=/dev/nvme0n3 00:33:56.164 [job3] 00:33:56.164 filename=/dev/nvme0n4 00:33:56.164 Could not set queue depth (nvme0n1) 00:33:56.164 Could not set queue depth (nvme0n2) 00:33:56.164 Could not set queue depth (nvme0n3) 00:33:56.164 Could not set queue depth (nvme0n4) 00:33:56.425 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.425 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.425 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.425 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:56.425 fio-3.35 00:33:56.425 Starting 4 threads 
00:33:57.843 00:33:57.843 job0: (groupid=0, jobs=1): err= 0: pid=2479845: Wed Nov 20 16:45:43 2024 00:33:57.843 read: IOPS=9373, BW=36.6MiB/s (38.4MB/s)(36.9MiB/1009msec) 00:33:57.843 slat (nsec): min=1009, max=6772.8k, avg=52761.00, stdev=399379.07 00:33:57.843 clat (usec): min=1517, max=14902, avg=7202.58, stdev=1693.63 00:33:57.843 lat (usec): min=2919, max=14904, avg=7255.34, stdev=1707.57 00:33:57.843 clat percentiles (usec): 00:33:57.843 | 1.00th=[ 3621], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 5866], 00:33:57.843 | 30.00th=[ 6194], 40.00th=[ 6521], 50.00th=[ 6849], 60.00th=[ 7439], 00:33:57.843 | 70.00th=[ 7898], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[10421], 00:33:57.843 | 99.00th=[11863], 99.50th=[11994], 99.90th=[14484], 99.95th=[14484], 00:33:57.843 | 99.99th=[14877] 00:33:57.843 write: IOPS=9641, BW=37.7MiB/s (39.5MB/s)(38.0MiB/1009msec); 0 zone resets 00:33:57.843 slat (nsec): min=1704, max=5454.7k, avg=46566.97, stdev=333140.96 00:33:57.843 clat (usec): min=1199, max=12201, avg=6123.17, stdev=1379.71 00:33:57.843 lat (usec): min=1210, max=12203, avg=6169.74, stdev=1382.63 00:33:57.843 clat percentiles (usec): 00:33:57.843 | 1.00th=[ 2573], 5.00th=[ 3851], 10.00th=[ 4080], 20.00th=[ 4817], 00:33:57.843 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6587], 00:33:57.843 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 8717], 00:33:57.843 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[11863], 99.95th=[12125], 00:33:57.843 | 99.99th=[12256] 00:33:57.843 bw ( KiB/s): min=38768, max=39056, per=40.98%, avg=38912.00, stdev=203.65, samples=2 00:33:57.843 iops : min= 9692, max= 9764, avg=9728.00, stdev=50.91, samples=2 00:33:57.843 lat (msec) : 2=0.14%, 4=5.36%, 10=91.08%, 20=3.42% 00:33:57.843 cpu : usr=7.54%, sys=9.72%, ctx=540, majf=0, minf=1 00:33:57.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:33:57.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.843 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.843 issued rwts: total=9458,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.843 job1: (groupid=0, jobs=1): err= 0: pid=2479846: Wed Nov 20 16:45:43 2024 00:33:57.843 read: IOPS=3026, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1015msec) 00:33:57.843 slat (nsec): min=1006, max=9974.0k, avg=105709.95, stdev=722349.16 00:33:57.843 clat (usec): min=4128, max=28837, avg=12251.19, stdev=4136.25 00:33:57.843 lat (usec): min=4137, max=28840, avg=12356.90, stdev=4197.05 00:33:57.843 clat percentiles (usec): 00:33:57.843 | 1.00th=[ 5473], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[10683], 00:33:57.843 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:33:57.843 | 70.00th=[11469], 80.00th=[11994], 90.00th=[17695], 95.00th=[22414], 00:33:57.843 | 99.00th=[27395], 99.50th=[28443], 99.90th=[28705], 99.95th=[28967], 00:33:57.844 | 99.99th=[28967] 00:33:57.844 write: IOPS=3505, BW=13.7MiB/s (14.4MB/s)(13.9MiB/1015msec); 0 zone resets 00:33:57.844 slat (nsec): min=1691, max=10733k, avg=186594.92, stdev=973768.85 00:33:57.844 clat (usec): min=1225, max=111523, avg=25538.94, stdev=23583.29 00:33:57.844 lat (usec): min=1234, max=111532, avg=25725.53, stdev=23717.54 00:33:57.844 clat percentiles (msec): 00:33:57.844 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:33:57.844 | 30.00th=[ 10], 40.00th=[ 13], 50.00th=[ 19], 60.00th=[ 21], 00:33:57.844 | 70.00th=[ 23], 80.00th=[ 43], 90.00th=[ 60], 95.00th=[ 74], 00:33:57.844 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 112], 99.95th=[ 112], 00:33:57.844 | 99.99th=[ 112] 00:33:57.844 bw ( KiB/s): min=10368, max=17072, per=14.45%, avg=13720.00, stdev=4740.44, samples=2 00:33:57.844 iops : min= 2592, max= 4268, avg=3430.00, stdev=1185.11, samples=2 00:33:57.844 lat (msec) : 2=0.12%, 4=0.62%, 10=21.73%, 20=49.65%, 50=19.19% 00:33:57.844 lat (msec) : 100=7.51%, 250=1.18% 
00:33:57.844 cpu : usr=2.47%, sys=3.35%, ctx=374, majf=0, minf=1 00:33:57.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:33:57.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.844 issued rwts: total=3072,3558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.844 job2: (groupid=0, jobs=1): err= 0: pid=2479861: Wed Nov 20 16:45:43 2024 00:33:57.844 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:33:57.844 slat (nsec): min=1034, max=10862k, avg=109429.65, stdev=786058.16 00:33:57.844 clat (usec): min=4081, max=30021, avg=12960.49, stdev=4241.04 00:33:57.844 lat (usec): min=4090, max=30024, avg=13069.92, stdev=4304.04 00:33:57.844 clat percentiles (usec): 00:33:57.844 | 1.00th=[ 5866], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[10028], 00:33:57.844 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:33:57.844 | 70.00th=[12649], 80.00th=[14091], 90.00th=[18220], 95.00th=[23200], 00:33:57.844 | 99.00th=[28705], 99.50th=[29492], 99.90th=[30016], 99.95th=[30016], 00:33:57.844 | 99.99th=[30016] 00:33:57.844 write: IOPS=3875, BW=15.1MiB/s (15.9MB/s)(15.3MiB/1009msec); 0 zone resets 00:33:57.844 slat (nsec): min=1766, max=9118.5k, avg=151198.70, stdev=823905.59 00:33:57.844 clat (usec): min=1820, max=92455, avg=20580.75, stdev=17990.00 00:33:57.844 lat (usec): min=2963, max=92466, avg=20731.94, stdev=18104.31 00:33:57.844 clat percentiles (usec): 00:33:57.844 | 1.00th=[ 3195], 5.00th=[ 6783], 10.00th=[ 7242], 20.00th=[ 8586], 00:33:57.844 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[12649], 60.00th=[19006], 00:33:57.844 | 70.00th=[20317], 80.00th=[27919], 90.00th=[51119], 95.00th=[53740], 00:33:57.844 | 99.00th=[87557], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:33:57.844 | 99.99th=[92799] 00:33:57.844 bw ( 
KiB/s): min=14400, max=15856, per=15.93%, avg=15128.00, stdev=1029.55, samples=2 00:33:57.844 iops : min= 3600, max= 3964, avg=3782.00, stdev=257.39, samples=2 00:33:57.844 lat (msec) : 2=0.01%, 4=0.64%, 10=24.86%, 20=51.56%, 50=17.36% 00:33:57.844 lat (msec) : 100=5.56% 00:33:57.844 cpu : usr=2.18%, sys=4.66%, ctx=352, majf=0, minf=1 00:33:57.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:57.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.844 issued rwts: total=3584,3910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.844 job3: (groupid=0, jobs=1): err= 0: pid=2479867: Wed Nov 20 16:45:43 2024 00:33:57.844 read: IOPS=7308, BW=28.5MiB/s (29.9MB/s)(29.9MiB/1048msec) 00:33:57.844 slat (nsec): min=1023, max=6851.2k, avg=61939.49, stdev=478285.70 00:33:57.844 clat (usec): min=3231, max=54325, avg=8969.86, stdev=5706.17 00:33:57.844 lat (usec): min=3237, max=54971, avg=9031.80, stdev=5715.28 00:33:57.844 clat percentiles (usec): 00:33:57.844 | 1.00th=[ 3916], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 6652], 00:33:57.844 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8586], 00:33:57.844 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[11731], 95.00th=[12780], 00:33:57.844 | 99.00th=[48497], 99.50th=[48497], 99.90th=[54264], 99.95th=[54264], 00:33:57.844 | 99.99th=[54264] 00:33:57.844 write: IOPS=7328, BW=28.6MiB/s (30.0MB/s)(30.0MiB/1048msec); 0 zone resets 00:33:57.844 slat (nsec): min=1696, max=8342.8k, avg=63329.16, stdev=422783.78 00:33:57.844 clat (usec): min=1346, max=75160, avg=8373.48, stdev=7975.62 00:33:57.844 lat (usec): min=1349, max=75169, avg=8436.81, stdev=8026.42 00:33:57.844 clat percentiles (usec): 00:33:57.844 | 1.00th=[ 3064], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 5538], 00:33:57.844 | 30.00th=[ 6325], 
40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 7635], 00:33:57.844 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 9896], 95.00th=[11338], 00:33:57.844 | 99.00th=[61604], 99.50th=[64226], 99.90th=[72877], 99.95th=[74974], 00:33:57.844 | 99.99th=[74974] 00:33:57.844 bw ( KiB/s): min=28672, max=32768, per=32.36%, avg=30720.00, stdev=2896.31, samples=2 00:33:57.844 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:33:57.844 lat (msec) : 2=0.19%, 4=1.47%, 10=82.52%, 20=13.48%, 50=1.38% 00:33:57.844 lat (msec) : 100=0.96% 00:33:57.844 cpu : usr=5.82%, sys=6.87%, ctx=597, majf=0, minf=1 00:33:57.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:57.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:57.844 issued rwts: total=7659,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:57.844 00:33:57.844 Run status group 0 (all jobs): 00:33:57.844 READ: bw=88.6MiB/s (92.9MB/s), 11.8MiB/s-36.6MiB/s (12.4MB/s-38.4MB/s), io=92.9MiB (97.4MB), run=1009-1048msec 00:33:57.844 WRITE: bw=92.7MiB/s (97.2MB/s), 13.7MiB/s-37.7MiB/s (14.4MB/s-39.5MB/s), io=97.2MiB (102MB), run=1009-1048msec 00:33:57.844 00:33:57.844 Disk stats (read/write): 00:33:57.844 nvme0n1: ios=7775/8192, merge=0/0, ticks=52498/47089, in_queue=99587, util=86.96% 00:33:57.844 nvme0n2: ios=2605/3031, merge=0/0, ticks=29916/72649, in_queue=102565, util=90.91% 00:33:57.844 nvme0n3: ios=3123/3215, merge=0/0, ticks=37929/62947, in_queue=100876, util=95.15% 00:33:57.844 nvme0n4: ios=6205/6415, merge=0/0, ticks=48514/53352, in_queue=101866, util=94.68% 00:33:57.844 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:57.844 [global] 
00:33:57.844 thread=1 00:33:57.844 invalidate=1 00:33:57.844 rw=randwrite 00:33:57.844 time_based=1 00:33:57.844 runtime=1 00:33:57.844 ioengine=libaio 00:33:57.844 direct=1 00:33:57.844 bs=4096 00:33:57.844 iodepth=128 00:33:57.844 norandommap=0 00:33:57.844 numjobs=1 00:33:57.844 00:33:57.844 verify_dump=1 00:33:57.844 verify_backlog=512 00:33:57.844 verify_state_save=0 00:33:57.844 do_verify=1 00:33:57.844 verify=crc32c-intel 00:33:57.844 [job0] 00:33:57.844 filename=/dev/nvme0n1 00:33:57.844 [job1] 00:33:57.844 filename=/dev/nvme0n2 00:33:57.844 [job2] 00:33:57.844 filename=/dev/nvme0n3 00:33:57.844 [job3] 00:33:57.844 filename=/dev/nvme0n4 00:33:57.844 Could not set queue depth (nvme0n1) 00:33:57.844 Could not set queue depth (nvme0n2) 00:33:57.844 Could not set queue depth (nvme0n3) 00:33:57.844 Could not set queue depth (nvme0n4) 00:33:58.112 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:58.112 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:58.112 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:58.112 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:58.112 fio-3.35 00:33:58.112 Starting 4 threads 00:33:59.518 00:33:59.518 job0: (groupid=0, jobs=1): err= 0: pid=2480353: Wed Nov 20 16:45:45 2024 00:33:59.518 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:33:59.518 slat (nsec): min=1007, max=16963k, avg=92373.58, stdev=756588.01 00:33:59.518 clat (usec): min=3686, max=41439, avg=11628.19, stdev=5467.03 00:33:59.518 lat (usec): min=3690, max=41448, avg=11720.57, stdev=5527.30 00:33:59.518 clat percentiles (usec): 00:33:59.518 | 1.00th=[ 4621], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:33:59.518 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10683], 
00:33:59.518 | 70.00th=[12911], 80.00th=[14746], 90.00th=[18482], 95.00th=[22676], 00:33:59.518 | 99.00th=[33817], 99.50th=[38011], 99.90th=[40633], 99.95th=[41681], 00:33:59.518 | 99.99th=[41681] 00:33:59.518 write: IOPS=5873, BW=22.9MiB/s (24.1MB/s)(23.1MiB/1007msec); 0 zone resets 00:33:59.518 slat (nsec): min=1615, max=19286k, avg=75382.86, stdev=557969.52 00:33:59.518 clat (usec): min=1846, max=41400, avg=10488.14, stdev=5228.29 00:33:59.518 lat (usec): min=1854, max=41403, avg=10563.52, stdev=5258.66 00:33:59.518 clat percentiles (usec): 00:33:59.518 | 1.00th=[ 4047], 5.00th=[ 4817], 10.00th=[ 5669], 20.00th=[ 6521], 00:33:59.518 | 30.00th=[ 7046], 40.00th=[ 7898], 50.00th=[ 8586], 60.00th=[10028], 00:33:59.518 | 70.00th=[12256], 80.00th=[14877], 90.00th=[17695], 95.00th=[19792], 00:33:59.518 | 99.00th=[26870], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:33:59.518 | 99.99th=[41157] 00:33:59.518 bw ( KiB/s): min=19256, max=27048, per=25.19%, avg=23152.00, stdev=5509.78, samples=2 00:33:59.518 iops : min= 4814, max= 6762, avg=5788.00, stdev=1377.44, samples=2 00:33:59.518 lat (msec) : 2=0.05%, 4=0.66%, 10=54.49%, 20=38.94%, 50=5.86% 00:33:59.518 cpu : usr=4.67%, sys=5.86%, ctx=370, majf=0, minf=2 00:33:59.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:59.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.518 issued rwts: total=5632,5915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.518 job1: (groupid=0, jobs=1): err= 0: pid=2480361: Wed Nov 20 16:45:45 2024 00:33:59.518 read: IOPS=9669, BW=37.8MiB/s (39.6MB/s)(38.0MiB/1006msec) 00:33:59.518 slat (nsec): min=989, max=7084.0k, avg=49612.11, stdev=389894.73 00:33:59.518 clat (usec): min=1887, max=14408, avg=6593.73, stdev=1742.18 00:33:59.518 lat (usec): min=1894, max=15352, 
avg=6643.34, stdev=1765.58 00:33:59.518 clat percentiles (usec): 00:33:59.518 | 1.00th=[ 3163], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 5211], 00:33:59.519 | 30.00th=[ 5604], 40.00th=[ 5866], 50.00th=[ 6325], 60.00th=[ 6652], 00:33:59.519 | 70.00th=[ 7111], 80.00th=[ 8029], 90.00th=[ 8979], 95.00th=[10159], 00:33:59.519 | 99.00th=[11469], 99.50th=[11994], 99.90th=[13042], 99.95th=[13304], 00:33:59.519 | 99.99th=[14353] 00:33:59.519 write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(39.5MiB/1006msec); 0 zone resets 00:33:59.519 slat (nsec): min=1593, max=17992k, avg=46642.25, stdev=346569.61 00:33:59.519 clat (usec): min=715, max=22059, avg=6243.29, stdev=3204.62 00:33:59.519 lat (usec): min=723, max=22068, avg=6289.94, stdev=3215.59 00:33:59.519 clat percentiles (usec): 00:33:59.519 | 1.00th=[ 2376], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 4490], 00:33:59.519 | 30.00th=[ 5211], 40.00th=[ 5407], 50.00th=[ 5604], 60.00th=[ 5735], 00:33:59.519 | 70.00th=[ 6259], 80.00th=[ 6915], 90.00th=[ 8094], 95.00th=[12125], 00:33:59.519 | 99.00th=[20055], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:33:59.519 | 99.99th=[22152] 00:33:59.519 bw ( KiB/s): min=39616, max=40304, per=43.48%, avg=39960.00, stdev=486.49, samples=2 00:33:59.519 iops : min= 9904, max=10076, avg=9990.00, stdev=121.62, samples=2 00:33:59.519 lat (usec) : 750=0.05%, 1000=0.01% 00:33:59.519 lat (msec) : 2=0.33%, 4=8.09%, 10=85.45%, 20=5.56%, 50=0.52% 00:33:59.519 cpu : usr=6.57%, sys=8.56%, ctx=614, majf=0, minf=1 00:33:59.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:33:59.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.519 issued rwts: total=9728,10118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.519 job2: (groupid=0, jobs=1): err= 0: pid=2480369: Wed Nov 20 16:45:45 2024 
00:33:59.519 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:33:59.519 slat (nsec): min=971, max=18746k, avg=168148.05, stdev=1226510.24 00:33:59.519 clat (msec): min=7, max=106, avg=24.16, stdev=15.53 00:33:59.519 lat (msec): min=7, max=114, avg=24.33, stdev=15.65 00:33:59.519 clat percentiles (msec): 00:33:59.519 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 13], 00:33:59.519 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 26], 00:33:59.519 | 70.00th=[ 31], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 54], 00:33:59.519 | 99.00th=[ 88], 99.50th=[ 96], 99.90th=[ 107], 99.95th=[ 107], 00:33:59.519 | 99.99th=[ 107] 00:33:59.519 write: IOPS=2395, BW=9581KiB/s (9811kB/s)(9648KiB/1007msec); 0 zone resets 00:33:59.519 slat (nsec): min=1631, max=18039k, avg=267713.82, stdev=1503991.92 00:33:59.519 clat (usec): min=1203, max=120132, avg=32267.03, stdev=23186.79 00:33:59.519 lat (msec): min=6, max=120, avg=32.53, stdev=23.36 00:33:59.519 clat percentiles (msec): 00:33:59.519 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 15], 00:33:59.519 | 30.00th=[ 17], 40.00th=[ 22], 50.00th=[ 27], 60.00th=[ 33], 00:33:59.519 | 70.00th=[ 38], 80.00th=[ 42], 90.00th=[ 53], 95.00th=[ 92], 00:33:59.519 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:33:59.519 | 99.99th=[ 121] 00:33:59.519 bw ( KiB/s): min= 8192, max=10080, per=9.94%, avg=9136.00, stdev=1335.02, samples=2 00:33:59.519 iops : min= 2048, max= 2520, avg=2284.00, stdev=333.75, samples=2 00:33:59.519 lat (msec) : 2=0.02%, 10=2.89%, 20=41.59%, 50=46.32%, 100=6.79% 00:33:59.519 lat (msec) : 250=2.38% 00:33:59.519 cpu : usr=1.59%, sys=2.78%, ctx=208, majf=0, minf=1 00:33:59.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:33:59.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.519 issued rwts: total=2048,2412,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:33:59.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.519 job3: (groupid=0, jobs=1): err= 0: pid=2480370: Wed Nov 20 16:45:45 2024 00:33:59.519 read: IOPS=5011, BW=19.6MiB/s (20.5MB/s)(20.5MiB/1048msec) 00:33:59.519 slat (nsec): min=1011, max=14795k, avg=81969.56, stdev=698807.60 00:33:59.519 clat (usec): min=1489, max=60629, avg=12031.99, stdev=8656.65 00:33:59.519 lat (usec): min=1495, max=69610, avg=12113.96, stdev=8702.98 00:33:59.519 clat percentiles (usec): 00:33:59.519 | 1.00th=[ 2638], 5.00th=[ 4883], 10.00th=[ 6652], 20.00th=[ 7373], 00:33:59.519 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[10290], 60.00th=[11994], 00:33:59.519 | 70.00th=[13435], 80.00th=[15533], 90.00th=[17695], 95.00th=[19792], 00:33:59.519 | 99.00th=[60031], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:33:59.519 | 99.99th=[60556] 00:33:59.519 write: IOPS=5374, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1048msec); 0 zone resets 00:33:59.519 slat (nsec): min=1632, max=11386k, avg=86760.81, stdev=610745.32 00:33:59.519 clat (usec): min=702, max=68704, avg=12376.39, stdev=13980.49 00:33:59.519 lat (usec): min=736, max=68713, avg=12463.15, stdev=14079.14 00:33:59.519 clat percentiles (usec): 00:33:59.519 | 1.00th=[ 1369], 5.00th=[ 3556], 10.00th=[ 4686], 20.00th=[ 5866], 00:33:59.519 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 7898], 60.00th=[ 8356], 00:33:59.519 | 70.00th=[10290], 80.00th=[11731], 90.00th=[20317], 95.00th=[57410], 00:33:59.519 | 99.00th=[64750], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:33:59.519 | 99.99th=[68682] 00:33:59.519 bw ( KiB/s): min=16384, max=28672, per=24.51%, avg=22528.00, stdev=8688.93, samples=2 00:33:59.519 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:33:59.519 lat (usec) : 750=0.01%, 1000=0.12% 00:33:59.519 lat (msec) : 2=1.40%, 4=2.77%, 10=53.43%, 20=34.82%, 50=3.15% 00:33:59.519 lat (msec) : 100=4.31% 00:33:59.519 cpu : usr=4.58%, sys=5.44%, ctx=345, majf=0, minf=1 
00:33:59.519 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:59.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:59.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:59.519 issued rwts: total=5252,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:59.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:59.519 00:33:59.519 Run status group 0 (all jobs): 00:33:59.519 READ: bw=84.5MiB/s (88.6MB/s), 8135KiB/s-37.8MiB/s (8330kB/s-39.6MB/s), io=88.5MiB (92.8MB), run=1006-1048msec 00:33:59.519 WRITE: bw=89.7MiB/s (94.1MB/s), 9581KiB/s-39.3MiB/s (9811kB/s-41.2MB/s), io=94.1MiB (98.6MB), run=1006-1048msec 00:33:59.519 00:33:59.519 Disk stats (read/write): 00:33:59.519 nvme0n1: ios=5164/5183, merge=0/0, ticks=52502/47352, in_queue=99854, util=86.47% 00:33:59.519 nvme0n2: ios=8237/8239, merge=0/0, ticks=51182/49799, in_queue=100981, util=91.34% 00:33:59.519 nvme0n3: ios=1593/2002, merge=0/0, ticks=12988/21751, in_queue=34739, util=92.96% 00:33:59.519 nvme0n4: ios=4155/4608, merge=0/0, ticks=43528/58693, in_queue=102221, util=96.71% 00:33:59.519 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:59.519 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2480681 00:33:59.519 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:59.519 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:59.519 [global] 00:33:59.519 thread=1 00:33:59.519 invalidate=1 00:33:59.519 rw=read 00:33:59.519 time_based=1 00:33:59.519 runtime=10 00:33:59.519 ioengine=libaio 00:33:59.519 direct=1 00:33:59.519 bs=4096 00:33:59.519 iodepth=1 00:33:59.519 norandommap=1 00:33:59.519 numjobs=1 00:33:59.519 
00:33:59.519 [job0] 00:33:59.519 filename=/dev/nvme0n1 00:33:59.519 [job1] 00:33:59.519 filename=/dev/nvme0n2 00:33:59.519 [job2] 00:33:59.519 filename=/dev/nvme0n3 00:33:59.519 [job3] 00:33:59.519 filename=/dev/nvme0n4 00:33:59.519 Could not set queue depth (nvme0n1) 00:33:59.519 Could not set queue depth (nvme0n2) 00:33:59.519 Could not set queue depth (nvme0n3) 00:33:59.519 Could not set queue depth (nvme0n4) 00:33:59.782 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.782 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.782 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.782 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:59.782 fio-3.35 00:33:59.782 Starting 4 threads 00:34:02.322 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:02.583 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:02.583 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=450560, buflen=4096 00:34:02.583 fio: pid=2480878, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:02.583 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.583 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:02.583 fio: io_u error on file /dev/nvme0n3: Operation not supported: read 
offset=2056192, buflen=4096 00:34:02.583 fio: pid=2480873, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:02.844 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:02.844 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:02.844 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=622592, buflen=4096 00:34:02.844 fio: pid=2480866, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:03.103 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16584704, buflen=4096 00:34:03.103 fio: pid=2480867, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:03.103 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.103 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:03.103 00:34:03.103 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2480866: Wed Nov 20 16:45:48 2024 00:34:03.103 read: IOPS=51, BW=203KiB/s (208kB/s)(608KiB/2992msec) 00:34:03.103 slat (usec): min=6, max=1790, avg=35.69, stdev=144.56 00:34:03.103 clat (usec): min=439, max=41235, avg=19499.20, stdev=20168.89 00:34:03.103 lat (usec): min=466, max=42912, avg=19534.96, stdev=20186.61 00:34:03.103 clat percentiles (usec): 00:34:03.103 | 1.00th=[ 449], 5.00th=[ 502], 10.00th=[ 545], 20.00th=[ 627], 00:34:03.103 | 30.00th=[ 685], 40.00th=[ 791], 50.00th=[ 881], 60.00th=[41157], 00:34:03.104 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:34:03.104 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:03.104 | 99.99th=[41157] 00:34:03.104 bw ( KiB/s): min= 96, max= 712, per=3.60%, avg=220.60, stdev=274.72, samples=5 00:34:03.104 iops : min= 24, max= 178, avg=55.00, stdev=68.76, samples=5 00:34:03.104 lat (usec) : 500=3.92%, 750=32.03%, 1000=16.99% 00:34:03.104 lat (msec) : 50=46.41% 00:34:03.104 cpu : usr=0.00%, sys=0.23%, ctx=156, majf=0, minf=1 00:34:03.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.104 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.104 issued rwts: total=153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.104 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2480867: Wed Nov 20 16:45:48 2024 00:34:03.104 read: IOPS=1285, BW=5142KiB/s (5265kB/s)(15.8MiB/3150msec) 00:34:03.104 slat (usec): min=5, max=11228, avg=31.95, stdev=281.31 00:34:03.104 clat (usec): min=187, max=5302, avg=735.23, stdev=124.96 00:34:03.104 lat (usec): min=196, max=11867, avg=767.17, stdev=308.01 00:34:03.104 clat percentiles (usec): 00:34:03.104 | 1.00th=[ 392], 5.00th=[ 523], 10.00th=[ 578], 20.00th=[ 668], 00:34:03.104 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 783], 00:34:03.104 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 840], 00:34:03.104 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 955], 99.95th=[ 979], 00:34:03.104 | 99.99th=[ 5276] 00:34:03.104 bw ( KiB/s): min= 5104, max= 5283, per=84.77%, avg=5181.83, stdev=63.27, samples=6 00:34:03.104 iops : min= 1276, max= 1320, avg=1295.33, stdev=15.58, samples=6 00:34:03.104 lat (usec) : 250=0.40%, 500=3.11%, 750=37.56%, 1000=58.86% 00:34:03.104 lat (msec) : 2=0.02%, 10=0.02% 00:34:03.104 cpu 
: usr=1.21%, sys=3.68%, ctx=4054, majf=0, minf=2 00:34:03.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.104 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.104 issued rwts: total=4050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.104 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2480873: Wed Nov 20 16:45:48 2024 00:34:03.104 read: IOPS=179, BW=715KiB/s (733kB/s)(2008KiB/2807msec) 00:34:03.104 slat (usec): min=6, max=10994, avg=57.64, stdev=558.20 00:34:03.104 clat (usec): min=282, max=41600, avg=5484.39, stdev=12967.23 00:34:03.104 lat (usec): min=309, max=41626, avg=5542.09, stdev=12968.15 00:34:03.104 clat percentiles (usec): 00:34:03.104 | 1.00th=[ 449], 5.00th=[ 545], 10.00th=[ 594], 20.00th=[ 652], 00:34:03.104 | 30.00th=[ 701], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 840], 00:34:03.104 | 70.00th=[ 865], 80.00th=[ 914], 90.00th=[40633], 95.00th=[41157], 00:34:03.104 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:34:03.104 | 99.99th=[41681] 00:34:03.104 bw ( KiB/s): min= 96, max= 1448, per=7.26%, avg=444.80, stdev=584.62, samples=5 00:34:03.104 iops : min= 24, max= 362, avg=111.20, stdev=146.15, samples=5 00:34:03.104 lat (usec) : 500=2.98%, 750=37.97%, 1000=45.92% 00:34:03.104 lat (msec) : 2=0.99%, 4=0.20%, 50=11.73% 00:34:03.104 cpu : usr=0.21%, sys=0.64%, ctx=505, majf=0, minf=2 00:34:03.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.104 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.104 issued rwts: total=503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.104 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:34:03.104 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2480878: Wed Nov 20 16:45:48 2024 00:34:03.104 read: IOPS=42, BW=168KiB/s (172kB/s)(440KiB/2617msec) 00:34:03.104 slat (nsec): min=6751, max=39102, avg=24543.04, stdev=6156.82 00:34:03.104 clat (usec): min=464, max=42091, avg=23535.86, stdev=20505.64 00:34:03.104 lat (usec): min=492, max=42117, avg=23560.38, stdev=20507.62 00:34:03.104 clat percentiles (usec): 00:34:03.104 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 701], 20.00th=[ 758], 00:34:03.104 | 30.00th=[ 832], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41681], 00:34:03.104 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:03.104 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:03.104 | 99.99th=[42206] 00:34:03.104 bw ( KiB/s): min= 88, max= 480, per=2.80%, avg=171.20, stdev=172.66, samples=5 00:34:03.104 iops : min= 22, max= 120, avg=42.80, stdev=43.16, samples=5 00:34:03.104 lat (usec) : 500=0.90%, 750=18.02%, 1000=25.23% 00:34:03.104 lat (msec) : 50=54.95% 00:34:03.104 cpu : usr=0.00%, sys=0.19%, ctx=111, majf=0, minf=2 00:34:03.104 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:03.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.104 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:03.104 issued rwts: total=111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:03.104 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:03.104 00:34:03.104 Run status group 0 (all jobs): 00:34:03.104 READ: bw=6112KiB/s (6258kB/s), 168KiB/s-5142KiB/s (172kB/s-5265kB/s), io=18.8MiB (19.7MB), run=2617-3150msec 00:34:03.104 00:34:03.104 Disk stats (read/write): 00:34:03.104 nvme0n1: ios=147/0, merge=0/0, ticks=2801/0, in_queue=2801, util=94.92% 00:34:03.104 nvme0n2: ios=4004/0, merge=0/0, ticks=2879/0, in_queue=2879, util=94.77% 
00:34:03.104 nvme0n3: ios=384/0, merge=0/0, ticks=2552/0, in_queue=2552, util=96.07% 00:34:03.104 nvme0n4: ios=110/0, merge=0/0, ticks=2591/0, in_queue=2591, util=96.43% 00:34:03.363 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.364 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:03.364 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.364 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:03.623 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.623 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:03.882 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:03.882 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:03.882 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:03.882 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2480681 00:34:03.882 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # 
fio_status=4 00:34:03.882 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:04.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:04.143 nvmf hotplug test: fio failed as expected 00:34:04.143 16:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:04.403 rmmod nvme_tcp 00:34:04.403 rmmod nvme_fabrics 00:34:04.403 rmmod nvme_keyring 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2477512 ']' 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2477512 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2477512 
']' 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2477512 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477512 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477512' 00:34:04.403 killing process with pid 2477512 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2477512 00:34:04.403 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2477512 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.663 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.570 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.570 00:34:06.570 real 0m27.850s 00:34:06.570 user 2m16.674s 00:34:06.570 sys 0m11.850s 00:34:06.570 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.570 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:06.570 ************************************ 00:34:06.570 END TEST nvmf_fio_target 00:34:06.570 ************************************ 00:34:06.570 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:06.570 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:06.570 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.570 16:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:06.831 ************************************ 00:34:06.831 START TEST nvmf_bdevio 00:34:06.831 ************************************ 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:06.831 * Looking for test storage... 00:34:06.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.831 16:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:06.831 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 
00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:06.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.832 --rc genhtml_branch_coverage=1 00:34:06.832 --rc genhtml_function_coverage=1 00:34:06.832 --rc genhtml_legend=1 00:34:06.832 --rc geninfo_all_blocks=1 00:34:06.832 --rc geninfo_unexecuted_blocks=1 00:34:06.832 00:34:06.832 ' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:06.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.832 --rc genhtml_branch_coverage=1 00:34:06.832 --rc genhtml_function_coverage=1 00:34:06.832 --rc genhtml_legend=1 00:34:06.832 --rc geninfo_all_blocks=1 00:34:06.832 --rc geninfo_unexecuted_blocks=1 00:34:06.832 00:34:06.832 ' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:06.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.832 --rc genhtml_branch_coverage=1 00:34:06.832 --rc genhtml_function_coverage=1 00:34:06.832 --rc genhtml_legend=1 00:34:06.832 --rc geninfo_all_blocks=1 00:34:06.832 --rc geninfo_unexecuted_blocks=1 00:34:06.832 00:34:06.832 ' 
00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:06.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.832 --rc genhtml_branch_coverage=1 00:34:06.832 --rc genhtml_function_coverage=1 00:34:06.832 --rc genhtml_legend=1 00:34:06.832 --rc geninfo_all_blocks=1 00:34:06.832 --rc geninfo_unexecuted_blocks=1 00:34:06.832 00:34:06.832 ' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.832 16:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.832 16:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.832 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.979 16:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:14.979 16:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:14.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.979 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:14.980 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:14.980 Found net devices under 0000:31:00.0: cvl_0_0 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:14.980 Found net devices under 0000:31:00.1: cvl_0_1 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.980 
16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.980 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:14.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:34:14.980 00:34:14.980 --- 10.0.0.2 ping statistics --- 00:34:14.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.980 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:14.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:34:14.980 00:34:14.980 --- 10.0.0.1 ping statistics --- 00:34:14.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.980 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2485938 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2485938 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2485938 ']' 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.980 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:14.980 [2024-11-20 16:46:00.281752] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:14.980 [2024-11-20 16:46:00.282931] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:34:14.980 [2024-11-20 16:46:00.282988] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.980 [2024-11-20 16:46:00.384132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:14.980 [2024-11-20 16:46:00.434767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.980 [2024-11-20 16:46:00.434816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.980 [2024-11-20 16:46:00.434824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.980 [2024-11-20 16:46:00.434832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.980 [2024-11-20 16:46:00.434838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:14.980 [2024-11-20 16:46:00.436898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:14.980 [2024-11-20 16:46:00.437053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:14.980 [2024-11-20 16:46:00.437217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:14.980 [2024-11-20 16:46:00.437218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:14.981 [2024-11-20 16:46:00.523369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:14.981 [2024-11-20 16:46:00.524396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:14.981 [2024-11-20 16:46:00.524654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:14.981 [2024-11-20 16:46:00.525324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:14.981 [2024-11-20 16:46:00.525367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.241 [2024-11-20 16:46:01.138286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.241 Malloc0 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.241 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:15.501 [2024-11-20 16:46:01.230501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
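The `rpc_cmd` calls traced above (bdevio.sh@18-22) are thin wrappers around SPDK's `scripts/rpc.py` talking to the target over `/var/tmp/spdk.sock`. A dry-run sketch of the same setup sequence, with the invocations collected into an array instead of executed, since they need a live `nvmf_tgt` (swap the stub for the real `rpc.py` on a running target):

```shell
# Stub: record each rpc.py invocation instead of sending it.
cmds=()
rpc() { cmds+=("rpc.py $*"); }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

printf '%s\n' "${cmds[@]}"
```

The last call is what produces the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice in the log.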
00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:15.501 { 00:34:15.501 "params": { 00:34:15.501 "name": "Nvme$subsystem", 00:34:15.501 "trtype": "$TEST_TRANSPORT", 00:34:15.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:15.501 "adrfam": "ipv4", 00:34:15.501 "trsvcid": "$NVMF_PORT", 00:34:15.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:15.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:15.501 "hdgst": ${hdgst:-false}, 00:34:15.501 "ddgst": ${ddgst:-false} 00:34:15.501 }, 00:34:15.501 "method": "bdev_nvme_attach_controller" 00:34:15.501 } 00:34:15.501 EOF 00:34:15.501 )") 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
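`gen_nvmf_target_json` (nvmf/common.sh@560-584, traced just above) builds the bdevio JSON config from a heredoc template and normalizes it with `jq`. A minimal recreation of the heredoc step alone, with the variables pinned to the values this run resolved (`jq` post-processing omitted):

```shell
# Values substituted by this particular run; adjust for your topology.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

This yields the `bdev_nvme_attach_controller` document that bdevio consumes via `--json /dev/fd/62`, matching the expanded JSON printed by `printf '%s\n'` in the trace below.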
00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:15.501 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:15.501 "params": { 00:34:15.501 "name": "Nvme1", 00:34:15.501 "trtype": "tcp", 00:34:15.501 "traddr": "10.0.0.2", 00:34:15.501 "adrfam": "ipv4", 00:34:15.501 "trsvcid": "4420", 00:34:15.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:15.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:15.501 "hdgst": false, 00:34:15.501 "ddgst": false 00:34:15.501 }, 00:34:15.501 "method": "bdev_nvme_attach_controller" 00:34:15.501 }' 00:34:15.501 [2024-11-20 16:46:01.293456] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:34:15.501 [2024-11-20 16:46:01.293511] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486285 ] 00:34:15.501 [2024-11-20 16:46:01.365436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:15.501 [2024-11-20 16:46:01.404256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.501 [2024-11-20 16:46:01.404371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:15.501 [2024-11-20 16:46:01.404375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.760 I/O targets: 00:34:15.760 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:15.760 00:34:15.760 00:34:15.760 CUnit - A unit testing framework for C - Version 2.1-3 00:34:15.760 http://cunit.sourceforge.net/ 00:34:15.760 00:34:15.760 00:34:15.760 Suite: bdevio tests on: Nvme1n1 00:34:16.020 Test: blockdev write read block ...passed 00:34:16.020 Test: blockdev write zeroes read block ...passed 00:34:16.020 Test: blockdev write zeroes read no split ...passed 00:34:16.020 Test: blockdev 
write zeroes read split ...passed 00:34:16.020 Test: blockdev write zeroes read split partial ...passed 00:34:16.020 Test: blockdev reset ...[2024-11-20 16:46:01.875485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:16.020 [2024-11-20 16:46:01.875546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16841c0 (9): Bad file descriptor 00:34:16.020 [2024-11-20 16:46:01.928051] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:16.020 passed 00:34:16.020 Test: blockdev write read 8 blocks ...passed 00:34:16.020 Test: blockdev write read size > 128k ...passed 00:34:16.020 Test: blockdev write read invalid size ...passed 00:34:16.281 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:16.281 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:16.281 Test: blockdev write read max offset ...passed 00:34:16.281 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:16.281 Test: blockdev writev readv 8 blocks ...passed 00:34:16.281 Test: blockdev writev readv 30 x 1block ...passed 00:34:16.281 Test: blockdev writev readv block ...passed 00:34:16.281 Test: blockdev writev readv size > 128k ...passed 00:34:16.281 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:16.281 Test: blockdev comparev and writev ...[2024-11-20 16:46:02.147558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.281 [2024-11-20 16:46:02.147583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.147595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.281 
[2024-11-20 16:46:02.147601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.148020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.281 [2024-11-20 16:46:02.148029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.148039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.281 [2024-11-20 16:46:02.148045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.148476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.281 [2024-11-20 16:46:02.148484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.148494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.281 [2024-11-20 16:46:02.148500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.148914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.281 [2024-11-20 16:46:02.148923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.148933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:16.281 [2024-11-20 16:46:02.148938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:16.281 passed 00:34:16.281 Test: blockdev nvme passthru rw ...passed 00:34:16.281 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:46:02.233514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.281 [2024-11-20 16:46:02.233526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.233754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.281 [2024-11-20 16:46:02.233762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.233976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.281 [2024-11-20 16:46:02.233988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:16.281 [2024-11-20 16:46:02.234223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:16.281 [2024-11-20 16:46:02.234231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:16.281 passed 00:34:16.542 Test: blockdev nvme admin passthru ...passed 00:34:16.542 Test: blockdev copy ...passed 00:34:16.542 00:34:16.542 Run Summary: Type Total Ran Passed Failed Inactive 00:34:16.542 suites 1 1 n/a 0 0 00:34:16.542 tests 23 23 23 0 0 00:34:16.542 asserts 152 152 152 0 n/a 00:34:16.542 00:34:16.542 Elapsed time = 1.174 
seconds 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:16.542 rmmod nvme_tcp 00:34:16.542 rmmod nvme_fabrics 00:34:16.542 rmmod nvme_keyring 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2485938 ']' 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2485938 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2485938 ']' 00:34:16.542 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2485938 00:34:16.543 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:16.543 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.543 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485938 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485938' 00:34:16.803 killing process with pid 2485938 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2485938 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2485938 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:16.803 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.420 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:19.420 00:34:19.420 real 0m12.278s 00:34:19.420 user 0m10.107s 00:34:19.420 sys 0m6.357s 00:34:19.420 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.420 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:19.420 ************************************ 00:34:19.420 END TEST nvmf_bdevio 00:34:19.420 ************************************ 00:34:19.420 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:19.420 00:34:19.420 real 4m57.711s 00:34:19.420 user 10m15.355s 00:34:19.420 sys 2m2.101s 00:34:19.420 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.420 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:19.420 ************************************ 00:34:19.420 END TEST nvmf_target_core_interrupt_mode 00:34:19.420 ************************************ 00:34:19.420 16:46:04 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:19.420 16:46:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:19.420 16:46:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.420 16:46:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:19.420 ************************************ 00:34:19.420 START TEST nvmf_interrupt 00:34:19.420 ************************************ 00:34:19.420 16:46:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:19.420 * Looking for test storage... 
00:34:19.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.420 --rc genhtml_branch_coverage=1 00:34:19.420 --rc genhtml_function_coverage=1 00:34:19.420 --rc genhtml_legend=1 00:34:19.420 --rc geninfo_all_blocks=1 00:34:19.420 --rc geninfo_unexecuted_blocks=1 00:34:19.420 00:34:19.420 ' 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.420 --rc genhtml_branch_coverage=1 00:34:19.420 --rc 
genhtml_function_coverage=1 00:34:19.420 --rc genhtml_legend=1 00:34:19.420 --rc geninfo_all_blocks=1 00:34:19.420 --rc geninfo_unexecuted_blocks=1 00:34:19.420 00:34:19.420 ' 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.420 --rc genhtml_branch_coverage=1 00:34:19.420 --rc genhtml_function_coverage=1 00:34:19.420 --rc genhtml_legend=1 00:34:19.420 --rc geninfo_all_blocks=1 00:34:19.420 --rc geninfo_unexecuted_blocks=1 00:34:19.420 00:34:19.420 ' 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:19.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.420 --rc genhtml_branch_coverage=1 00:34:19.420 --rc genhtml_function_coverage=1 00:34:19.420 --rc genhtml_legend=1 00:34:19.420 --rc geninfo_all_blocks=1 00:34:19.420 --rc geninfo_unexecuted_blocks=1 00:34:19.420 00:34:19.420 ' 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:19.420 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:19.421 
16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.421 
16:46:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:19.421 16:46:05 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:19.421 
16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:19.421 16:46:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.108 16:46:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:26.108 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:26.108 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.108 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.109 16:46:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:26.109 Found net devices under 0000:31:00.0: cvl_0_0 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:26.109 Found net devices under 0000:31:00.1: cvl_0_1 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.109 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.369 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.369 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.369 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.369 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.369 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.369 16:46:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.369 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.369 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:26.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:34:26.369 00:34:26.369 --- 10.0.0.2 ping statistics --- 00:34:26.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.369 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:26.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:34:26.630 00:34:26.630 --- 10.0.0.1 ping statistics --- 00:34:26.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.630 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:26.630 16:46:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2490667 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2490667 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2490667 ']' 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.630 16:46:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:26.630 [2024-11-20 16:46:12.438886] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:26.630 [2024-11-20 16:46:12.439899] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:34:26.630 [2024-11-20 16:46:12.439939] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.630 [2024-11-20 16:46:12.518792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:26.630 [2024-11-20 16:46:12.553928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:26.630 [2024-11-20 16:46:12.553960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.630 [2024-11-20 16:46:12.553967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.630 [2024-11-20 16:46:12.553974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.630 [2024-11-20 16:46:12.553979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:26.630 [2024-11-20 16:46:12.555122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.630 [2024-11-20 16:46:12.555209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.890 [2024-11-20 16:46:12.611022] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:26.890 [2024-11-20 16:46:12.611454] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:26.890 [2024-11-20 16:46:12.611811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:27.460 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:27.460 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:27.460 16:46:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:27.460 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:27.460 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:27.460 16:46:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.460 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:27.460 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:27.461 5000+0 records in 00:34:27.461 5000+0 records out 00:34:27.461 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0191951 s, 533 MB/s 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:27.461 AIO0 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.461 16:46:13 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:27.461 [2024-11-20 16:46:13.347688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:27.461 [2024-11-20 16:46:13.375996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2490667 0 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2490667 0 idle 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2490667 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2490667 -w 256 00:34:27.461 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2490667 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.23 reactor_0' 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2490667 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.23 reactor_0 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:27.722 
16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2490667 1 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2490667 1 idle 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2490667 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2490667 -w 256 00:34:27.722 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2490677 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2490677 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2491038 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2490667 0 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2490667 0 busy 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2490667 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:27.982 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2490667 -w 256 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2490667 root 20 0 128.2g 44928 32256 R 80.0 0.0 0:00.35 reactor_0' 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2490667 root 20 0 128.2g 44928 32256 R 80.0 0.0 0:00.35 reactor_0 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=80.0 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=80 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:27.983 16:46:13 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2490667 1 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2490667 1 busy 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2490667 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:27.983 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:28.243 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2490667 -w 256 00:34:28.243 16:46:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2490677 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.25 reactor_1' 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2490677 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.25 reactor_1 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:28.243 16:46:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2491038 00:34:38.235 Initializing NVMe Controllers 00:34:38.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:38.235 Controller IO queue size 256, less than required. 00:34:38.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:38.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:38.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:38.235 Initialization complete. Launching workers. 
00:34:38.235 ======================================================== 00:34:38.235 Latency(us) 00:34:38.235 Device Information : IOPS MiB/s Average min max 00:34:38.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16577.50 64.76 15451.94 2623.14 19006.65 00:34:38.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19332.50 75.52 13243.89 7470.77 51574.25 00:34:38.235 ======================================================== 00:34:38.235 Total : 35910.00 140.27 14263.22 2623.14 51574.25 00:34:38.235 00:34:38.235 [2024-11-20 16:46:23.937041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d94c0 is same with the state(6) to be set 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2490667 0 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2490667 0 idle 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2490667 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.235 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.235 
16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:38.236 16:46:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2490667 -w 256 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2490667 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.23 reactor_0' 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2490667 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.23 reactor_0 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2490667 1 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2490667 1 idle 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2490667 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@14 -- # local idle_threshold=30 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2490667 -w 256 00:34:38.236 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2490677 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2490677 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:38.496 16:46:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t 
tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:39.066 16:46:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:39.066 16:46:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:39.066 16:46:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:39.066 16:46:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:39.066 16:46:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:40.977 16:46:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:40.977 16:46:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:40.977 16:46:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2490667 0 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2490667 0 idle 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2490667 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local 
idle_threshold=30 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2490667 -w 256 00:34:41.237 16:46:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2490667 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0' 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2490667 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2490667 1 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2490667 1 idle 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2490667 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2490667 -w 256 00:34:41.237 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2490677 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2490677 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:41.497 
16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:41.497 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:41.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.757 rmmod nvme_tcp 00:34:41.757 
rmmod nvme_fabrics 00:34:41.757 rmmod nvme_keyring 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2490667 ']' 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2490667 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2490667 ']' 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2490667 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2490667 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2490667' 00:34:41.757 killing process with pid 2490667 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2490667 00:34:41.757 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2490667 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-save 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.018 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:44.563 16:46:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:44.563 00:34:44.563 real 0m25.009s 00:34:44.563 user 0m40.343s 00:34:44.563 sys 0m9.180s 00:34:44.563 16:46:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.563 16:46:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.563 ************************************ 00:34:44.563 END TEST nvmf_interrupt 00:34:44.563 ************************************ 00:34:44.563 00:34:44.563 real 29m54.658s 00:34:44.563 user 61m16.532s 00:34:44.563 sys 9m59.673s 00:34:44.563 16:46:29 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.563 16:46:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.563 ************************************ 00:34:44.563 END TEST nvmf_tcp 00:34:44.563 ************************************ 00:34:44.563 16:46:30 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:44.563 16:46:30 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:44.563 16:46:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:44.563 
16:46:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:44.563 16:46:30 -- common/autotest_common.sh@10 -- # set +x 00:34:44.563 ************************************ 00:34:44.563 START TEST spdkcli_nvmf_tcp 00:34:44.563 ************************************ 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:44.563 * Looking for test storage... 00:34:44.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- 
scripts/common.sh@345 -- # : 1 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:44.563 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:44.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.564 --rc genhtml_branch_coverage=1 00:34:44.564 --rc genhtml_function_coverage=1 00:34:44.564 --rc genhtml_legend=1 00:34:44.564 --rc geninfo_all_blocks=1 00:34:44.564 --rc geninfo_unexecuted_blocks=1 00:34:44.564 00:34:44.564 ' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:44.564 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:34:44.564 --rc genhtml_branch_coverage=1 00:34:44.564 --rc genhtml_function_coverage=1 00:34:44.564 --rc genhtml_legend=1 00:34:44.564 --rc geninfo_all_blocks=1 00:34:44.564 --rc geninfo_unexecuted_blocks=1 00:34:44.564 00:34:44.564 ' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:44.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.564 --rc genhtml_branch_coverage=1 00:34:44.564 --rc genhtml_function_coverage=1 00:34:44.564 --rc genhtml_legend=1 00:34:44.564 --rc geninfo_all_blocks=1 00:34:44.564 --rc geninfo_unexecuted_blocks=1 00:34:44.564 00:34:44.564 ' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:44.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:44.564 --rc genhtml_branch_coverage=1 00:34:44.564 --rc genhtml_function_coverage=1 00:34:44.564 --rc genhtml_legend=1 00:34:44.564 --rc geninfo_all_blocks=1 00:34:44.564 --rc geninfo_unexecuted_blocks=1 00:34:44.564 00:34:44.564 ' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:44.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2494218 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2494218 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2494218 ']' 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:44.564 
16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.564 16:46:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.564 [2024-11-20 16:46:30.350735] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:34:44.564 [2024-11-20 16:46:30.350802] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494218 ] 00:34:44.564 [2024-11-20 16:46:30.429304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:44.564 [2024-11-20 16:46:30.472768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.564 [2024-11-20 16:46:30.472770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:45.505 16:46:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:45.505 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:45.505 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:45.505 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:45.505 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:45.505 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:45.505 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:45.505 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:45.505 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:45.505 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:45.505 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:45.505 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:45.505 ' 00:34:48.053 [2024-11-20 16:46:33.618906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.993 [2024-11-20 16:46:34.826866] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:51.534 [2024-11-20 16:46:37.045366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:34:53.443 [2024-11-20 16:46:38.951121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:54.827 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:54.827 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:54.827 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:54.827 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:54.827 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:54.827 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:54.827 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:54.827 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.827 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.827 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:54.827 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:54.827 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:54.827 16:46:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:54.827 16:46:40 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:34:54.827 16:46:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.827 16:46:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:54.827 16:46:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:54.827 16:46:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.827 16:46:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:54.827 16:46:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:55.088 16:46:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:55.088 16:46:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:55.088 16:46:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:55.088 16:46:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:55.088 16:46:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.088 16:46:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:55.088 16:46:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.088 16:46:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.088 16:46:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:55.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:55.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:55.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:55.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:55.088 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:55.088 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:55.088 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:55.088 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:55.088 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:55.088 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:55.088 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:55.088 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:55.088 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:55.088 ' 00:35:00.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:00.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:00.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:00.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:00.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:00.372 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:00.372 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:00.372 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:00.372 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:00.372 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:00.372 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:00.372 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:00.372 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:00.372 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2494218 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2494218 ']' 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2494218 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2494218 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2494218' 00:35:00.372 killing process with pid 2494218 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2494218 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2494218 00:35:00.372 16:46:46 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2494218 ']' 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2494218 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2494218 ']' 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2494218 00:35:00.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2494218) - No such process 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2494218 is not found' 00:35:00.372 Process with pid 2494218 is not found 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:00.372 00:35:00.372 real 0m16.224s 00:35:00.372 user 0m33.559s 00:35:00.372 sys 0m0.726s 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:00.372 16:46:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.372 ************************************ 00:35:00.372 END TEST spdkcli_nvmf_tcp 00:35:00.372 ************************************ 00:35:00.372 16:46:46 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:00.372 16:46:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:00.372 16:46:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:35:00.372 16:46:46 -- common/autotest_common.sh@10 -- # set +x 00:35:00.633 ************************************ 00:35:00.633 START TEST nvmf_identify_passthru 00:35:00.633 ************************************ 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:00.633 * Looking for test storage... 00:35:00.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:00.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.633 --rc genhtml_branch_coverage=1 00:35:00.633 --rc genhtml_function_coverage=1 00:35:00.633 --rc genhtml_legend=1 00:35:00.633 --rc geninfo_all_blocks=1 00:35:00.633 --rc geninfo_unexecuted_blocks=1 00:35:00.633 
00:35:00.633 ' 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:00.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.633 --rc genhtml_branch_coverage=1 00:35:00.633 --rc genhtml_function_coverage=1 00:35:00.633 --rc genhtml_legend=1 00:35:00.633 --rc geninfo_all_blocks=1 00:35:00.633 --rc geninfo_unexecuted_blocks=1 00:35:00.633 00:35:00.633 ' 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:00.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.633 --rc genhtml_branch_coverage=1 00:35:00.633 --rc genhtml_function_coverage=1 00:35:00.633 --rc genhtml_legend=1 00:35:00.633 --rc geninfo_all_blocks=1 00:35:00.633 --rc geninfo_unexecuted_blocks=1 00:35:00.633 00:35:00.633 ' 00:35:00.633 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:00.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.633 --rc genhtml_branch_coverage=1 00:35:00.633 --rc genhtml_function_coverage=1 00:35:00.633 --rc genhtml_legend=1 00:35:00.633 --rc geninfo_all_blocks=1 00:35:00.633 --rc geninfo_unexecuted_blocks=1 00:35:00.633 00:35:00.633 ' 00:35:00.633 16:46:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.633 16:46:46 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.633 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.633 16:46:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.633 16:46:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.633 16:46:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.634 16:46:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.634 16:46:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:00.634 16:46:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:00.634 16:46:46 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:00.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.634 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.894 16:46:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.894 16:46:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.894 16:46:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.894 16:46:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.894 16:46:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.894 16:46:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.894 16:46:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.894 16:46:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.894 16:46:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:00.894 16:46:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.894 16:46:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.894 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:00.894 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:00.894 16:46:46 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:00.894 16:46:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.029 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:09.029 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:09.029 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:35:09.029 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:09.030 
16:46:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:09.030 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:09.030 Found 0000:31:00.1 
(0x8086 - 0x159b) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:09.030 Found net devices under 0000:31:00.0: cvl_0_0 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:09.030 16:46:53 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:09.030 Found net devices under 0000:31:00.1: cvl_0_1 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:09.030 
16:46:53 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:09.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:09.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:35:09.030 00:35:09.030 --- 10.0.0.2 ping statistics --- 00:35:09.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.030 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:35:09.030 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:09.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:09.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:35:09.030 00:35:09.030 --- 10.0.0.1 ping statistics --- 00:35:09.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.031 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:09.031 16:46:53 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:09.031 16:46:53 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:09.031 16:46:53 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:09.031 16:46:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:09.031 
16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:09.031 16:46:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:09.031 16:46:54 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605500 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:09.031 16:46:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:09.292 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:09.292 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.292 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.292 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2501181 00:35:09.292 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:09.292 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:09.292 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2501181 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2501181 ']' 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:09.292 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.292 [2024-11-20 16:46:55.174944] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:35:09.292 [2024-11-20 16:46:55.175016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:09.553 [2024-11-20 16:46:55.258655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:09.553 [2024-11-20 16:46:55.301579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:09.553 [2024-11-20 16:46:55.301613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:09.553 [2024-11-20 16:46:55.301622] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:09.553 [2024-11-20 16:46:55.301629] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:09.553 [2024-11-20 16:46:55.301635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:09.553 [2024-11-20 16:46:55.303489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.553 [2024-11-20 16:46:55.303605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:09.553 [2024-11-20 16:46:55.303762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:09.553 [2024-11-20 16:46:55.303762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:10.123 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:10.123 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:10.123 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:10.123 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.123 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.123 INFO: Log level set to 20 00:35:10.123 INFO: Requests: 00:35:10.123 { 00:35:10.123 "jsonrpc": "2.0", 00:35:10.123 "method": "nvmf_set_config", 00:35:10.123 "id": 1, 00:35:10.123 "params": { 00:35:10.123 "admin_cmd_passthru": { 00:35:10.123 "identify_ctrlr": true 00:35:10.123 } 00:35:10.123 } 00:35:10.123 } 00:35:10.123 00:35:10.123 INFO: response: 00:35:10.123 { 00:35:10.123 "jsonrpc": "2.0", 00:35:10.123 "id": 1, 00:35:10.123 "result": true 00:35:10.123 } 00:35:10.123 00:35:10.123 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.123 16:46:55 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:10.123 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.123 16:46:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.123 INFO: Setting log level to 20 00:35:10.123 INFO: Setting log level to 20 00:35:10.123 INFO: Log level set to 20 00:35:10.123 INFO: Log level set to 20 00:35:10.123 
INFO: Requests: 00:35:10.123 { 00:35:10.123 "jsonrpc": "2.0", 00:35:10.123 "method": "framework_start_init", 00:35:10.123 "id": 1 00:35:10.123 } 00:35:10.123 00:35:10.123 INFO: Requests: 00:35:10.123 { 00:35:10.123 "jsonrpc": "2.0", 00:35:10.123 "method": "framework_start_init", 00:35:10.123 "id": 1 00:35:10.123 } 00:35:10.123 00:35:10.123 [2024-11-20 16:46:56.056948] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:10.123 INFO: response: 00:35:10.123 { 00:35:10.123 "jsonrpc": "2.0", 00:35:10.123 "id": 1, 00:35:10.123 "result": true 00:35:10.123 } 00:35:10.123 00:35:10.123 INFO: response: 00:35:10.123 { 00:35:10.123 "jsonrpc": "2.0", 00:35:10.123 "id": 1, 00:35:10.123 "result": true 00:35:10.123 } 00:35:10.123 00:35:10.123 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.123 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:10.123 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.123 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.123 INFO: Setting log level to 40 00:35:10.123 INFO: Setting log level to 40 00:35:10.123 INFO: Setting log level to 40 00:35:10.123 [2024-11-20 16:46:56.070279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.123 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.123 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:10.123 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:10.123 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.382 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:10.382 16:46:56 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.382 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.642 Nvme0n1 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.642 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.642 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.642 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.642 [2024-11-20 16:46:56.465248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.642 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:10.642 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.642 16:46:56 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.642 [ 00:35:10.642 { 00:35:10.642 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:10.642 "subtype": "Discovery", 00:35:10.642 "listen_addresses": [], 00:35:10.642 "allow_any_host": true, 00:35:10.642 "hosts": [] 00:35:10.642 }, 00:35:10.642 { 00:35:10.642 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:10.642 "subtype": "NVMe", 00:35:10.643 "listen_addresses": [ 00:35:10.643 { 00:35:10.643 "trtype": "TCP", 00:35:10.643 "adrfam": "IPv4", 00:35:10.643 "traddr": "10.0.0.2", 00:35:10.643 "trsvcid": "4420" 00:35:10.643 } 00:35:10.643 ], 00:35:10.643 "allow_any_host": true, 00:35:10.643 "hosts": [], 00:35:10.643 "serial_number": "SPDK00000000000001", 00:35:10.643 "model_number": "SPDK bdev Controller", 00:35:10.643 "max_namespaces": 1, 00:35:10.643 "min_cntlid": 1, 00:35:10.643 "max_cntlid": 65519, 00:35:10.643 "namespaces": [ 00:35:10.643 { 00:35:10.643 "nsid": 1, 00:35:10.643 "bdev_name": "Nvme0n1", 00:35:10.643 "name": "Nvme0n1", 00:35:10.643 "nguid": "36344730526055000025384500000031", 00:35:10.643 "uuid": "36344730-5260-5500-0025-384500000031" 00:35:10.643 } 00:35:10.643 ] 00:35:10.643 } 00:35:10.643 ] 00:35:10.643 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.643 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:10.643 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:10.643 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:10.902 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605500 00:35:10.903 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:10.903 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:10.903 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:11.163 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:11.163 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605500 '!=' S64GNE0R605500 ']' 00:35:11.163 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:11.163 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:11.163 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.163 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:11.163 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.163 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:11.163 16:46:56 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.163 rmmod nvme_tcp 00:35:11.163 rmmod nvme_fabrics 00:35:11.163 rmmod nvme_keyring 00:35:11.163 16:46:56 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2501181 ']' 00:35:11.163 16:46:56 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2501181 00:35:11.163 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2501181 ']' 00:35:11.163 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2501181 00:35:11.163 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:11.163 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.163 16:46:56 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2501181 00:35:11.163 16:46:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:11.163 16:46:57 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:11.163 16:46:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2501181' 00:35:11.163 killing process with pid 2501181 00:35:11.164 16:46:57 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2501181 00:35:11.164 16:46:57 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2501181 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:11.424 16:46:57 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:11.424 16:46:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.424 16:46:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:11.424 16:46:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.966 16:46:59 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:13.966 00:35:13.966 real 0m13.016s 00:35:13.966 user 0m10.238s 00:35:13.966 sys 0m6.544s 00:35:13.966 16:46:59 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:13.966 16:46:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.966 ************************************ 00:35:13.966 END TEST nvmf_identify_passthru 00:35:13.966 ************************************ 00:35:13.966 16:46:59 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:13.966 16:46:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:13.966 16:46:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:13.966 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:35:13.966 ************************************ 00:35:13.966 START TEST nvmf_dif 00:35:13.966 ************************************ 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:13.966 * Looking for test storage... 
00:35:13.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:13.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.966 --rc genhtml_branch_coverage=1 00:35:13.966 --rc genhtml_function_coverage=1 00:35:13.966 --rc genhtml_legend=1 00:35:13.966 --rc geninfo_all_blocks=1 00:35:13.966 --rc geninfo_unexecuted_blocks=1 00:35:13.966 00:35:13.966 ' 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:13.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.966 --rc genhtml_branch_coverage=1 00:35:13.966 --rc genhtml_function_coverage=1 00:35:13.966 --rc genhtml_legend=1 00:35:13.966 --rc geninfo_all_blocks=1 00:35:13.966 --rc geninfo_unexecuted_blocks=1 00:35:13.966 00:35:13.966 ' 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:35:13.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.966 --rc genhtml_branch_coverage=1 00:35:13.966 --rc genhtml_function_coverage=1 00:35:13.966 --rc genhtml_legend=1 00:35:13.966 --rc geninfo_all_blocks=1 00:35:13.966 --rc geninfo_unexecuted_blocks=1 00:35:13.966 00:35:13.966 ' 00:35:13.966 16:46:59 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:13.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:13.966 --rc genhtml_branch_coverage=1 00:35:13.966 --rc genhtml_function_coverage=1 00:35:13.966 --rc genhtml_legend=1 00:35:13.966 --rc geninfo_all_blocks=1 00:35:13.966 --rc geninfo_unexecuted_blocks=1 00:35:13.966 00:35:13.966 ' 00:35:13.966 16:46:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:35:13.966 16:46:59 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.966 16:46:59 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.966 16:46:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.966 16:46:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.966 16:46:59 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.966 16:46:59 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.966 16:46:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:13.967 16:46:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:13.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:13.967 16:46:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:13.967 16:46:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:35:13.967 16:46:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:13.967 16:46:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:13.967 16:46:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.967 16:46:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:13.967 16:46:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:13.967 16:46:59 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:13.967 16:46:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:20.546 16:47:06 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:20.546 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.546 16:47:06 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:20.547 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.547 16:47:06 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:20.547 Found net devices under 0000:31:00.0: cvl_0_0 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:20.547 Found net devices under 0000:31:00.1: cvl_0_1 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:20.547 
16:47:06 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:20.547 16:47:06 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:20.808 16:47:06 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:20.808 16:47:06 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:20.808 16:47:06 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:20.808 16:47:06 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:20.808 16:47:06 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:21.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:21.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:35:21.069 00:35:21.069 --- 10.0.0.2 ping statistics --- 00:35:21.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.069 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:21.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:21.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:35:21.069 00:35:21.069 --- 10.0.0.1 ping statistics --- 00:35:21.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.069 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:21.069 16:47:06 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:24.366 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:24.366 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:35:24.366 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:24.366 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:24.625 16:47:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:24.625 16:47:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:24.625 16:47:10 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:24.625 16:47:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2507238 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2507238 00:35:24.625 16:47:10 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:24.625 16:47:10 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2507238 ']' 00:35:24.625 16:47:10 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.625 16:47:10 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.625 16:47:10 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:24.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.625 16:47:10 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.625 16:47:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:24.625 [2024-11-20 16:47:10.498051] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:35:24.625 [2024-11-20 16:47:10.498115] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:24.625 [2024-11-20 16:47:10.581867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.884 [2024-11-20 16:47:10.621314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:24.884 [2024-11-20 16:47:10.621346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:24.884 [2024-11-20 16:47:10.621355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:24.884 [2024-11-20 16:47:10.621362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:24.884 [2024-11-20 16:47:10.621367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:24.884 [2024-11-20 16:47:10.621956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:25.455 16:47:11 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.455 16:47:11 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:25.455 16:47:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:25.455 16:47:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.455 [2024-11-20 16:47:11.308018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.455 16:47:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.455 16:47:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:25.455 ************************************ 00:35:25.455 START TEST fio_dif_1_default 00:35:25.455 ************************************ 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.455 bdev_null0 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:25.455 [2024-11-20 16:47:11.392371] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:25.455 { 00:35:25.455 "params": { 00:35:25.455 "name": "Nvme$subsystem", 00:35:25.455 "trtype": "$TEST_TRANSPORT", 00:35:25.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.455 "adrfam": "ipv4", 00:35:25.455 "trsvcid": "$NVMF_PORT", 00:35:25.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.455 "hdgst": ${hdgst:-false}, 00:35:25.455 "ddgst": ${ddgst:-false} 00:35:25.455 }, 00:35:25.455 "method": "bdev_nvme_attach_controller" 00:35:25.455 } 00:35:25.455 EOF 00:35:25.455 )") 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:25.455 16:47:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:25.455 "params": { 00:35:25.455 "name": "Nvme0", 00:35:25.455 "trtype": "tcp", 00:35:25.455 "traddr": "10.0.0.2", 00:35:25.455 "adrfam": "ipv4", 00:35:25.455 "trsvcid": "4420", 00:35:25.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.455 "hdgst": false, 00:35:25.455 "ddgst": false 00:35:25.455 }, 00:35:25.455 "method": "bdev_nvme_attach_controller" 00:35:25.455 }' 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:25.735 16:47:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.998 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:25.998 fio-3.35 
00:35:25.998 Starting 1 thread 00:35:38.289 00:35:38.289 filename0: (groupid=0, jobs=1): err= 0: pid=2507765: Wed Nov 20 16:47:22 2024 00:35:38.289 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10008msec) 00:35:38.289 slat (nsec): min=5506, max=52275, avg=6379.95, stdev=2265.73 00:35:38.289 clat (usec): min=664, max=42017, avg=20962.58, stdev=20081.19 00:35:38.289 lat (usec): min=669, max=42026, avg=20968.96, stdev=20081.10 00:35:38.289 clat percentiles (usec): 00:35:38.289 | 1.00th=[ 750], 5.00th=[ 881], 10.00th=[ 898], 20.00th=[ 922], 00:35:38.289 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 1844], 60.00th=[41157], 00:35:38.289 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:38.289 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:38.289 | 99.99th=[42206] 00:35:38.289 bw ( KiB/s): min= 704, max= 768, per=99.79%, avg=761.60, stdev=16.74, samples=20 00:35:38.289 iops : min= 176, max= 192, avg=190.40, stdev= 4.19, samples=20 00:35:38.289 lat (usec) : 750=1.00%, 1000=48.48% 00:35:38.289 lat (msec) : 2=0.63%, 50=49.90% 00:35:38.289 cpu : usr=93.32%, sys=6.47%, ctx=12, majf=0, minf=249 00:35:38.289 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:38.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:38.289 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:38.289 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:38.289 00:35:38.289 Run status group 0 (all jobs): 00:35:38.289 READ: bw=763KiB/s (781kB/s), 763KiB/s-763KiB/s (781kB/s-781kB/s), io=7632KiB (7815kB), run=10008-10008msec 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in 
"$@" 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.289 00:35:38.289 real 0m11.105s 00:35:38.289 user 0m28.886s 00:35:38.289 sys 0m0.959s 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:38.289 ************************************ 00:35:38.289 END TEST fio_dif_1_default 00:35:38.289 ************************************ 00:35:38.289 16:47:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:38.289 16:47:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:38.289 16:47:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:38.289 16:47:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:38.289 ************************************ 00:35:38.289 START TEST fio_dif_1_multi_subsystems 00:35:38.289 ************************************ 00:35:38.289 16:47:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.289 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.290 bdev_null0 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.290 16:47:22 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.290 [2024-11-20 16:47:22.578425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.290 bdev_null1 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:38.290 { 00:35:38.290 "params": { 00:35:38.290 "name": "Nvme$subsystem", 00:35:38.290 "trtype": "$TEST_TRANSPORT", 00:35:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.290 "adrfam": "ipv4", 00:35:38.290 "trsvcid": "$NVMF_PORT", 00:35:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.290 "hdgst": ${hdgst:-false}, 00:35:38.290 "ddgst": ${ddgst:-false} 00:35:38.290 }, 00:35:38.290 "method": "bdev_nvme_attach_controller" 00:35:38.290 } 00:35:38.290 EOF 00:35:38.290 )") 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:38.290 { 00:35:38.290 "params": { 00:35:38.290 "name": "Nvme$subsystem", 00:35:38.290 "trtype": "$TEST_TRANSPORT", 00:35:38.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:38.290 "adrfam": "ipv4", 00:35:38.290 "trsvcid": "$NVMF_PORT", 00:35:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:38.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:38.290 "hdgst": ${hdgst:-false}, 00:35:38.290 "ddgst": ${ddgst:-false} 00:35:38.290 }, 00:35:38.290 "method": "bdev_nvme_attach_controller" 00:35:38.290 } 00:35:38.290 EOF 00:35:38.290 )") 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:38.290 "params": { 00:35:38.290 "name": "Nvme0", 00:35:38.290 "trtype": "tcp", 00:35:38.290 "traddr": "10.0.0.2", 00:35:38.290 "adrfam": "ipv4", 00:35:38.290 "trsvcid": "4420", 00:35:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:38.290 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:38.290 "hdgst": false, 00:35:38.290 "ddgst": false 00:35:38.290 }, 00:35:38.290 "method": "bdev_nvme_attach_controller" 00:35:38.290 },{ 00:35:38.290 "params": { 00:35:38.290 "name": "Nvme1", 00:35:38.290 "trtype": "tcp", 00:35:38.290 "traddr": "10.0.0.2", 00:35:38.290 "adrfam": "ipv4", 00:35:38.290 "trsvcid": "4420", 00:35:38.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:38.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:38.290 "hdgst": false, 00:35:38.290 "ddgst": false 00:35:38.290 }, 00:35:38.290 "method": "bdev_nvme_attach_controller" 00:35:38.290 }' 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:38.290 16:47:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:38.290 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:38.290 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:38.290 fio-3.35 00:35:38.290 Starting 2 threads 00:35:48.285 00:35:48.285 filename0: (groupid=0, jobs=1): err= 0: pid=2509971: Wed Nov 20 16:47:33 2024 00:35:48.285 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10036msec) 00:35:48.285 slat (nsec): min=5503, max=33579, avg=6658.69, stdev=2047.83 00:35:48.285 clat (usec): min=2067, max=42040, avg=40941.45, stdev=2508.53 00:35:48.285 lat (usec): min=2079, max=42047, avg=40948.11, stdev=2507.86 00:35:48.285 clat percentiles (usec): 00:35:48.285 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:48.285 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:48.285 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:35:48.285 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:48.285 | 99.99th=[42206] 00:35:48.285 bw ( KiB/s): min= 384, max= 416, per=49.82%, avg=390.40, stdev=13.13, samples=20 00:35:48.285 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:35:48.285 lat (msec) : 4=0.41%, 50=99.59% 00:35:48.285 cpu : usr=95.12%, sys=4.65%, ctx=14, majf=0, minf=92 00:35:48.285 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.285 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.285 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.285 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:48.285 filename1: (groupid=0, jobs=1): err= 0: pid=2509972: Wed Nov 20 16:47:33 2024 00:35:48.285 read: IOPS=98, BW=393KiB/s (402kB/s)(3936KiB/10015msec) 00:35:48.285 slat (nsec): min=5548, max=32498, avg=6624.93, stdev=2109.68 00:35:48.285 clat (usec): min=930, max=42045, avg=40689.84, stdev=3562.15 00:35:48.285 lat (usec): min=936, max=42051, avg=40696.46, stdev=3561.73 00:35:48.285 clat percentiles (usec): 00:35:48.285 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:48.285 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:48.285 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:48.285 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:48.285 | 99.99th=[42206] 00:35:48.285 bw ( KiB/s): min= 384, max= 416, per=50.08%, avg=392.00, stdev=14.22, samples=20 00:35:48.285 iops : min= 96, max= 104, avg=98.00, stdev= 3.55, samples=20 00:35:48.285 lat (usec) : 1000=0.41% 00:35:48.285 lat (msec) : 2=0.41%, 50=99.19% 00:35:48.285 cpu : usr=95.43%, sys=4.35%, ctx=16, majf=0, minf=191 00:35:48.285 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.285 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.285 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:48.285 00:35:48.285 Run status group 0 (all jobs): 00:35:48.285 READ: bw=783KiB/s (802kB/s), 391KiB/s-393KiB/s (400kB/s-402kB/s), io=7856KiB (8045kB), run=10015-10036msec 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:48.285 
16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.285 00:35:48.285 real 0m11.342s 00:35:48.285 user 0m32.030s 00:35:48.285 sys 0m1.269s 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 ************************************ 00:35:48.285 END TEST fio_dif_1_multi_subsystems 00:35:48.285 ************************************ 00:35:48.285 16:47:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:48.285 16:47:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:48.285 16:47:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 ************************************ 00:35:48.285 START TEST fio_dif_rand_params 00:35:48.285 ************************************ 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 bdev_null0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 16:47:33 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.285 16:47:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.285 [2024-11-20 16:47:33.998181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.285 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.285 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:48.285 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:48.285 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:48.285 16:47:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:48.285 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.285 16:47:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:48.286 { 00:35:48.286 "params": { 
00:35:48.286 "name": "Nvme$subsystem", 00:35:48.286 "trtype": "$TEST_TRANSPORT", 00:35:48.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.286 "adrfam": "ipv4", 00:35:48.286 "trsvcid": "$NVMF_PORT", 00:35:48.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.286 "hdgst": ${hdgst:-false}, 00:35:48.286 "ddgst": ${ddgst:-false} 00:35:48.286 }, 00:35:48.286 "method": "bdev_nvme_attach_controller" 00:35:48.286 } 00:35:48.286 EOF 00:35:48.286 )") 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.286 16:47:34 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:48.286 "params": { 00:35:48.286 "name": "Nvme0", 00:35:48.286 "trtype": "tcp", 00:35:48.286 "traddr": "10.0.0.2", 00:35:48.286 "adrfam": "ipv4", 00:35:48.286 "trsvcid": "4420", 00:35:48.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.286 "hdgst": false, 00:35:48.286 "ddgst": false 00:35:48.286 }, 00:35:48.286 "method": "bdev_nvme_attach_controller" 00:35:48.286 }' 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.286 16:47:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.546 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:48.546 ... 00:35:48.546 fio-3.35 00:35:48.546 Starting 3 threads 00:35:55.124 00:35:55.124 filename0: (groupid=0, jobs=1): err= 0: pid=2512309: Wed Nov 20 16:47:40 2024 00:35:55.124 read: IOPS=239, BW=30.0MiB/s (31.4MB/s)(150MiB/5005msec) 00:35:55.124 slat (nsec): min=5793, max=32578, avg=8332.96, stdev=1899.73 00:35:55.124 clat (usec): min=6676, max=56707, avg=12498.87, stdev=4668.45 00:35:55.124 lat (usec): min=6696, max=56713, avg=12507.20, stdev=4668.52 00:35:55.124 clat percentiles (usec): 00:35:55.124 | 1.00th=[ 7635], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10552], 00:35:55.124 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:35:55.124 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14091], 95.00th=[14615], 00:35:55.124 | 99.00th=[49021], 99.50th=[50594], 99.90th=[55837], 99.95th=[56886], 00:35:55.124 | 99.99th=[56886] 00:35:55.124 bw ( KiB/s): min=24832, max=32256, per=33.94%, avg=30668.80, stdev=2221.29, samples=10 00:35:55.124 iops : min= 194, max= 252, avg=239.60, stdev=17.35, samples=10 00:35:55.124 lat (msec) : 10=11.50%, 20=87.00%, 50=0.75%, 100=0.75% 00:35:55.124 cpu : usr=95.58%, sys=4.16%, ctx=6, majf=0, minf=64 00:35:55.124 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.124 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.124 filename0: (groupid=0, jobs=1): err= 0: pid=2512310: Wed Nov 20 16:47:40 2024 00:35:55.124 read: IOPS=236, BW=29.5MiB/s (31.0MB/s)(149MiB/5047msec) 00:35:55.124 slat (nsec): min=5739, max=33532, avg=8316.81, stdev=1859.05 00:35:55.124 
clat (usec): min=6839, max=50138, avg=12644.81, stdev=2174.50 00:35:55.124 lat (usec): min=6851, max=50144, avg=12653.13, stdev=2174.43 00:35:55.124 clat percentiles (usec): 00:35:55.124 | 1.00th=[ 7898], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11207], 00:35:55.124 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:35:55.124 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14484], 95.00th=[14877], 00:35:55.124 | 99.00th=[16319], 99.50th=[16581], 99.90th=[46924], 99.95th=[50070], 00:35:55.124 | 99.99th=[50070] 00:35:55.124 bw ( KiB/s): min=28928, max=33280, per=33.74%, avg=30489.60, stdev=1425.09, samples=10 00:35:55.124 iops : min= 226, max= 260, avg=238.20, stdev=11.13, samples=10 00:35:55.124 lat (msec) : 10=6.87%, 20=92.96%, 50=0.08%, 100=0.08% 00:35:55.124 cpu : usr=95.46%, sys=4.30%, ctx=10, majf=0, minf=122 00:35:55.124 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.124 issued rwts: total=1193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.124 filename0: (groupid=0, jobs=1): err= 0: pid=2512311: Wed Nov 20 16:47:40 2024 00:35:55.124 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(146MiB/5023msec) 00:35:55.124 slat (nsec): min=5555, max=31250, avg=6289.48, stdev=910.33 00:35:55.124 clat (usec): min=6973, max=58107, avg=12864.28, stdev=5602.87 00:35:55.124 lat (usec): min=6979, max=58138, avg=12870.57, stdev=5603.12 00:35:55.124 clat percentiles (usec): 00:35:55.124 | 1.00th=[ 7701], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10683], 00:35:55.124 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:35:55.124 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14222], 95.00th=[14877], 00:35:55.124 | 99.00th=[51119], 99.50th=[53216], 99.90th=[57934], 99.95th=[57934], 
00:35:55.124 | 99.99th=[57934] 00:35:55.124 bw ( KiB/s): min=27648, max=31744, per=33.06%, avg=29875.20, stdev=1265.99, samples=10 00:35:55.124 iops : min= 216, max= 248, avg=233.40, stdev= 9.89, samples=10 00:35:55.124 lat (msec) : 10=10.68%, 20=87.52%, 50=0.09%, 100=1.71% 00:35:55.124 cpu : usr=95.02%, sys=4.74%, ctx=8, majf=0, minf=92 00:35:55.124 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.124 issued rwts: total=1170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.124 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.124 00:35:55.124 Run status group 0 (all jobs): 00:35:55.124 READ: bw=88.2MiB/s (92.5MB/s), 29.1MiB/s-30.0MiB/s (30.5MB/s-31.4MB/s), io=445MiB (467MB), run=5005-5047msec 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:55.124 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 bdev_null0 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 
16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 [2024-11-20 16:47:40.290414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 bdev_null1 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 
16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:35:55.125 bdev_null2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.125 { 00:35:55.125 "params": { 00:35:55.125 "name": "Nvme$subsystem", 00:35:55.125 "trtype": "$TEST_TRANSPORT", 00:35:55.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.125 "adrfam": "ipv4", 00:35:55.125 "trsvcid": "$NVMF_PORT", 00:35:55.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.125 "hdgst": ${hdgst:-false}, 00:35:55.125 "ddgst": ${ddgst:-false} 00:35:55.125 }, 00:35:55.125 "method": "bdev_nvme_attach_controller" 00:35:55.125 } 00:35:55.125 EOF 00:35:55.125 )") 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.125 { 00:35:55.125 "params": { 00:35:55.125 "name": "Nvme$subsystem", 00:35:55.125 "trtype": "$TEST_TRANSPORT", 00:35:55.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.125 "adrfam": "ipv4", 00:35:55.125 "trsvcid": "$NVMF_PORT", 00:35:55.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.125 "hdgst": ${hdgst:-false}, 00:35:55.125 "ddgst": ${ddgst:-false} 00:35:55.125 }, 00:35:55.125 "method": "bdev_nvme_attach_controller" 00:35:55.125 } 00:35:55.125 EOF 00:35:55.125 )") 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.125 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.125 { 00:35:55.125 "params": { 00:35:55.125 "name": "Nvme$subsystem", 00:35:55.125 "trtype": "$TEST_TRANSPORT", 00:35:55.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.125 "adrfam": "ipv4", 00:35:55.125 "trsvcid": "$NVMF_PORT", 00:35:55.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.125 "hdgst": ${hdgst:-false}, 00:35:55.125 "ddgst": ${ddgst:-false} 00:35:55.126 }, 00:35:55.126 "method": "bdev_nvme_attach_controller" 00:35:55.126 } 00:35:55.126 EOF 00:35:55.126 )") 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:55.126 "params": { 00:35:55.126 "name": "Nvme0", 00:35:55.126 "trtype": "tcp", 00:35:55.126 "traddr": "10.0.0.2", 00:35:55.126 "adrfam": "ipv4", 00:35:55.126 "trsvcid": "4420", 00:35:55.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.126 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.126 "hdgst": false, 00:35:55.126 "ddgst": false 00:35:55.126 }, 00:35:55.126 "method": "bdev_nvme_attach_controller" 00:35:55.126 },{ 00:35:55.126 "params": { 00:35:55.126 "name": "Nvme1", 00:35:55.126 "trtype": "tcp", 00:35:55.126 "traddr": "10.0.0.2", 00:35:55.126 "adrfam": "ipv4", 00:35:55.126 "trsvcid": "4420", 00:35:55.126 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:55.126 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:55.126 "hdgst": false, 00:35:55.126 "ddgst": false 00:35:55.126 }, 00:35:55.126 "method": "bdev_nvme_attach_controller" 00:35:55.126 },{ 00:35:55.126 "params": { 00:35:55.126 "name": "Nvme2", 00:35:55.126 "trtype": "tcp", 00:35:55.126 "traddr": "10.0.0.2", 00:35:55.126 "adrfam": "ipv4", 00:35:55.126 "trsvcid": "4420", 00:35:55.126 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:55.126 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:55.126 "hdgst": false, 00:35:55.126 "ddgst": false 00:35:55.126 }, 00:35:55.126 "method": "bdev_nvme_attach_controller" 00:35:55.126 }' 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.126 16:47:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:55.126 16:47:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.126 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.126 ... 00:35:55.126 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.126 ... 00:35:55.126 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:55.126 ... 
00:35:55.126 fio-3.35 00:35:55.126 Starting 24 threads 00:36:07.357 00:36:07.357 filename0: (groupid=0, jobs=1): err= 0: pid=2513672: Wed Nov 20 16:47:51 2024 00:36:07.357 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10003msec) 00:36:07.357 slat (nsec): min=5750, max=77420, avg=12962.78, stdev=8697.50 00:36:07.357 clat (usec): min=8509, max=39259, avg=32431.74, stdev=2807.25 00:36:07.357 lat (usec): min=8520, max=39269, avg=32444.71, stdev=2807.62 00:36:07.357 clat percentiles (usec): 00:36:07.357 | 1.00th=[13566], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:07.357 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:07.357 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.357 | 99.00th=[35914], 99.50th=[36439], 99.90th=[39060], 99.95th=[39060], 00:36:07.357 | 99.99th=[39060] 00:36:07.357 bw ( KiB/s): min= 1792, max= 2352, per=4.17%, avg=1969.68, stdev=114.99, samples=19 00:36:07.357 iops : min= 448, max= 588, avg=492.42, stdev=28.75, samples=19 00:36:07.357 lat (msec) : 10=0.33%, 20=1.30%, 50=98.37% 00:36:07.357 cpu : usr=98.87%, sys=0.86%, ctx=22, majf=0, minf=62 00:36:07.357 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:07.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.357 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.357 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.357 filename0: (groupid=0, jobs=1): err= 0: pid=2513673: Wed Nov 20 16:47:51 2024 00:36:07.357 read: IOPS=505, BW=2022KiB/s (2071kB/s)(19.8MiB/10007msec) 00:36:07.357 slat (usec): min=5, max=132, avg=17.50, stdev=14.24 00:36:07.357 clat (usec): min=2756, max=59163, avg=31511.35, stdev=5341.47 00:36:07.357 lat (usec): min=2773, max=59196, avg=31528.85, stdev=5342.53 00:36:07.357 clat percentiles (usec): 00:36:07.357 | 1.00th=[ 7767], 
5.00th=[21627], 10.00th=[24773], 20.00th=[32113], 00:36:07.357 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.357 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:36:07.357 | 99.00th=[47449], 99.50th=[52167], 99.90th=[58983], 99.95th=[58983], 00:36:07.357 | 99.99th=[58983] 00:36:07.357 bw ( KiB/s): min= 1840, max= 2760, per=4.30%, avg=2029.05, stdev=206.92, samples=19 00:36:07.357 iops : min= 460, max= 690, avg=507.26, stdev=51.73, samples=19 00:36:07.357 lat (msec) : 4=0.28%, 10=0.79%, 20=2.12%, 50=96.11%, 100=0.71% 00:36:07.357 cpu : usr=98.53%, sys=1.11%, ctx=81, majf=0, minf=44 00:36:07.357 IO depths : 1=4.9%, 2=9.8%, 4=20.9%, 8=56.5%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:07.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.357 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.357 issued rwts: total=5059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.357 filename0: (groupid=0, jobs=1): err= 0: pid=2513674: Wed Nov 20 16:47:51 2024 00:36:07.357 read: IOPS=606, BW=2424KiB/s (2482kB/s)(23.7MiB/10006msec) 00:36:07.357 slat (nsec): min=2852, max=26539, avg=6776.42, stdev=1472.11 00:36:07.357 clat (usec): min=884, max=33850, avg=26340.21, stdev=7160.64 00:36:07.357 lat (usec): min=891, max=33857, avg=26346.99, stdev=7160.88 00:36:07.357 clat percentiles (usec): 00:36:07.357 | 1.00th=[ 1565], 5.00th=[16057], 10.00th=[18220], 20.00th=[21103], 00:36:07.357 | 30.00th=[22676], 40.00th=[24249], 50.00th=[26608], 60.00th=[32375], 00:36:07.357 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:36:07.357 | 99.00th=[33424], 99.50th=[33817], 99.90th=[33817], 99.95th=[33817], 00:36:07.357 | 99.99th=[33817] 00:36:07.357 bw ( KiB/s): min= 1920, max= 3960, per=5.09%, avg=2404.63, stdev=482.37, samples=19 00:36:07.357 iops : min= 480, max= 990, avg=601.16, stdev=120.59, 
samples=19 00:36:07.357 lat (usec) : 1000=0.03% 00:36:07.357 lat (msec) : 2=1.25%, 4=1.47%, 10=0.54%, 20=16.69%, 50=80.01% 00:36:07.357 cpu : usr=98.67%, sys=0.94%, ctx=77, majf=0, minf=118 00:36:07.357 IO depths : 1=5.9%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:07.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.357 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.357 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.357 filename0: (groupid=0, jobs=1): err= 0: pid=2513675: Wed Nov 20 16:47:51 2024 00:36:07.357 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10011msec) 00:36:07.357 slat (nsec): min=5443, max=89536, avg=24408.32, stdev=13508.44 00:36:07.357 clat (usec): min=12369, max=63279, avg=32696.71, stdev=2324.62 00:36:07.357 lat (usec): min=12375, max=63295, avg=32721.12, stdev=2323.80 00:36:07.357 clat percentiles (usec): 00:36:07.357 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:36:07.357 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.357 | 70.00th=[32637], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.357 | 99.00th=[35390], 99.50th=[37487], 99.90th=[63177], 99.95th=[63177], 00:36:07.357 | 99.99th=[63177] 00:36:07.357 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1933.47, stdev=58.73, samples=19 00:36:07.357 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:36:07.357 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:07.357 cpu : usr=99.04%, sys=0.68%, ctx=13, majf=0, minf=43 00:36:07.357 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.357 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.357 issued rwts: total=4864,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:36:07.357 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.357 filename0: (groupid=0, jobs=1): err= 0: pid=2513676: Wed Nov 20 16:47:51 2024 00:36:07.357 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10011msec) 00:36:07.357 slat (nsec): min=5446, max=72393, avg=15888.46, stdev=10303.07 00:36:07.357 clat (usec): min=12297, max=63440, avg=32804.99, stdev=2487.72 00:36:07.357 lat (usec): min=12308, max=63464, avg=32820.88, stdev=2487.20 00:36:07.357 clat percentiles (usec): 00:36:07.357 | 1.00th=[25560], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:07.357 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:07.357 | 70.00th=[32900], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:07.357 | 99.00th=[38011], 99.50th=[41681], 99.90th=[63177], 99.95th=[63177], 00:36:07.357 | 99.99th=[63701] 00:36:07.357 bw ( KiB/s): min= 1795, max= 2048, per=4.10%, avg=1933.63, stdev=58.33, samples=19 00:36:07.357 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:36:07.357 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:07.357 cpu : usr=98.93%, sys=0.72%, ctx=51, majf=0, minf=66 00:36:07.357 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:07.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.358 filename0: (groupid=0, jobs=1): err= 0: pid=2513677: Wed Nov 20 16:47:51 2024 00:36:07.358 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10015msec) 00:36:07.358 slat (nsec): min=5730, max=67816, avg=19737.69, stdev=10466.06 00:36:07.358 clat (usec): min=20781, max=53253, avg=32625.65, stdev=1720.71 00:36:07.358 lat (usec): min=20787, max=53276, avg=32645.39, stdev=1721.40 00:36:07.358 clat percentiles 
(usec): 00:36:07.358 | 1.00th=[22938], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:07.358 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.358 | 70.00th=[32637], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.358 | 99.00th=[35390], 99.50th=[37487], 99.90th=[53216], 99.95th=[53216], 00:36:07.358 | 99.99th=[53216] 00:36:07.358 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1949.47, stdev=68.35, samples=19 00:36:07.358 iops : min= 448, max= 512, avg=487.37, stdev=17.09, samples=19 00:36:07.358 lat (msec) : 50=99.84%, 100=0.16% 00:36:07.358 cpu : usr=99.06%, sys=0.68%, ctx=18, majf=0, minf=51 00:36:07.358 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:07.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.358 filename0: (groupid=0, jobs=1): err= 0: pid=2513678: Wed Nov 20 16:47:51 2024 00:36:07.358 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10004msec) 00:36:07.358 slat (nsec): min=5696, max=75218, avg=16441.30, stdev=10333.68 00:36:07.358 clat (usec): min=22542, max=39602, avg=32748.15, stdev=1026.20 00:36:07.358 lat (usec): min=22554, max=39621, avg=32764.59, stdev=1026.54 00:36:07.358 clat percentiles (usec): 00:36:07.358 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:07.358 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.358 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.358 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39584], 99.95th=[39584], 00:36:07.358 | 99.99th=[39584] 00:36:07.358 bw ( KiB/s): min= 1920, max= 2048, per=4.12%, avg=1946.95, stdev=53.61, samples=19 00:36:07.358 iops : min= 480, max= 512, avg=486.74, 
stdev=13.40, samples=19 00:36:07.358 lat (msec) : 50=100.00% 00:36:07.358 cpu : usr=98.99%, sys=0.74%, ctx=11, majf=0, minf=52 00:36:07.358 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.358 filename0: (groupid=0, jobs=1): err= 0: pid=2513679: Wed Nov 20 16:47:51 2024 00:36:07.358 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10021msec) 00:36:07.358 slat (nsec): min=5637, max=77442, avg=14026.68, stdev=9720.05 00:36:07.358 clat (usec): min=13153, max=46060, avg=32590.21, stdev=2228.59 00:36:07.358 lat (usec): min=13164, max=46069, avg=32604.23, stdev=2228.51 00:36:07.358 clat percentiles (usec): 00:36:07.358 | 1.00th=[20841], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:07.358 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:07.358 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.358 | 99.00th=[35390], 99.50th=[40109], 99.90th=[45876], 99.95th=[45876], 00:36:07.358 | 99.99th=[45876] 00:36:07.358 bw ( KiB/s): min= 1920, max= 2052, per=4.14%, avg=1954.60, stdev=56.81, samples=20 00:36:07.358 iops : min= 480, max= 513, avg=488.65, stdev=14.20, samples=20 00:36:07.358 lat (msec) : 20=0.73%, 50=99.27% 00:36:07.358 cpu : usr=98.94%, sys=0.76%, ctx=16, majf=0, minf=85 00:36:07.358 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:07.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 issued rwts: total=4902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.358 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:36:07.358 filename1: (groupid=0, jobs=1): err= 0: pid=2513680: Wed Nov 20 16:47:51 2024 00:36:07.358 read: IOPS=490, BW=1962KiB/s (2009kB/s)(19.2MiB/10016msec) 00:36:07.358 slat (nsec): min=5725, max=54670, avg=13183.66, stdev=8799.16 00:36:07.358 clat (usec): min=11347, max=40165, avg=32513.33, stdev=2300.52 00:36:07.358 lat (usec): min=11368, max=40181, avg=32526.51, stdev=2299.95 00:36:07.358 clat percentiles (usec): 00:36:07.358 | 1.00th=[19268], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:07.358 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:07.358 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.358 | 99.00th=[34866], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109], 00:36:07.358 | 99.99th=[40109] 00:36:07.358 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=1958.40, stdev=73.12, samples=20 00:36:07.358 iops : min= 480, max= 544, avg=489.60, stdev=18.28, samples=20 00:36:07.358 lat (msec) : 20=1.30%, 50=98.70% 00:36:07.358 cpu : usr=99.04%, sys=0.62%, ctx=61, majf=0, minf=72 00:36:07.358 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.358 filename1: (groupid=0, jobs=1): err= 0: pid=2513681: Wed Nov 20 16:47:51 2024 00:36:07.358 read: IOPS=486, BW=1947KiB/s (1994kB/s)(19.0MiB/10015msec) 00:36:07.358 slat (nsec): min=5699, max=68412, avg=15529.67, stdev=10732.33 00:36:07.358 clat (usec): min=20035, max=53324, avg=32745.38, stdev=2452.53 00:36:07.358 lat (usec): min=20041, max=53343, avg=32760.91, stdev=2452.52 00:36:07.358 clat percentiles (usec): 00:36:07.358 | 1.00th=[22676], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 
00:36:07.358 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:07.358 | 70.00th=[32900], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:36:07.358 | 99.00th=[42730], 99.50th=[44303], 99.90th=[53216], 99.95th=[53216], 00:36:07.358 | 99.99th=[53216] 00:36:07.358 bw ( KiB/s): min= 1848, max= 2032, per=4.12%, avg=1945.26, stdev=46.44, samples=19 00:36:07.358 iops : min= 462, max= 508, avg=486.32, stdev=11.61, samples=19 00:36:07.358 lat (msec) : 50=99.79%, 100=0.21% 00:36:07.358 cpu : usr=98.75%, sys=0.95%, ctx=26, majf=0, minf=54 00:36:07.358 IO depths : 1=0.9%, 2=5.7%, 4=19.4%, 8=62.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:36:07.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 complete : 0=0.0%, 4=93.1%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 issued rwts: total=4876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.358 filename1: (groupid=0, jobs=1): err= 0: pid=2513682: Wed Nov 20 16:47:51 2024 00:36:07.358 read: IOPS=487, BW=1949KiB/s (1995kB/s)(19.1MiB/10017msec) 00:36:07.358 slat (nsec): min=5710, max=79936, avg=13555.28, stdev=10915.26 00:36:07.358 clat (usec): min=21492, max=40246, avg=32729.27, stdev=1239.65 00:36:07.358 lat (usec): min=21501, max=40254, avg=32742.83, stdev=1238.57 00:36:07.358 clat percentiles (usec): 00:36:07.358 | 1.00th=[26084], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:07.358 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:07.358 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.358 | 99.00th=[34866], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109], 00:36:07.358 | 99.99th=[40109] 00:36:07.358 bw ( KiB/s): min= 1912, max= 2048, per=4.12%, avg=1945.20, stdev=52.77, samples=20 00:36:07.358 iops : min= 478, max= 512, avg=486.30, stdev=13.19, samples=20 00:36:07.358 lat (msec) : 50=100.00% 00:36:07.358 cpu : usr=99.06%, 
sys=0.56%, ctx=51, majf=0, minf=49 00:36:07.358 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.358 filename1: (groupid=0, jobs=1): err= 0: pid=2513683: Wed Nov 20 16:47:51 2024 00:36:07.358 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10016msec) 00:36:07.358 slat (nsec): min=5698, max=52609, avg=10215.11, stdev=6609.11 00:36:07.358 clat (usec): min=8470, max=40114, avg=32329.14, stdev=2987.75 00:36:07.358 lat (usec): min=8482, max=40122, avg=32339.35, stdev=2986.72 00:36:07.358 clat percentiles (usec): 00:36:07.358 | 1.00th=[12780], 5.00th=[31851], 10.00th=[32375], 20.00th=[32375], 00:36:07.358 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:07.358 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.358 | 99.00th=[34341], 99.50th=[34866], 99.90th=[40109], 99.95th=[40109], 00:36:07.358 | 99.99th=[40109] 00:36:07.358 bw ( KiB/s): min= 1920, max= 2304, per=4.18%, avg=1971.20, stdev=96.50, samples=20 00:36:07.358 iops : min= 480, max= 576, avg=492.80, stdev=24.13, samples=20 00:36:07.358 lat (msec) : 10=0.32%, 20=1.62%, 50=98.06% 00:36:07.358 cpu : usr=99.16%, sys=0.57%, ctx=11, majf=0, minf=55 00:36:07.358 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.358 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.358 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.358 filename1: (groupid=0, jobs=1): err= 0: pid=2513684: Wed Nov 20 
16:47:51 2024 00:36:07.358 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10022msec) 00:36:07.358 slat (nsec): min=5710, max=77432, avg=13284.13, stdev=9531.79 00:36:07.358 clat (usec): min=12296, max=47048, avg=32635.61, stdev=1941.93 00:36:07.358 lat (usec): min=12305, max=47065, avg=32648.90, stdev=1942.05 00:36:07.358 clat percentiles (usec): 00:36:07.358 | 1.00th=[20841], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:07.358 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:07.358 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.359 | 99.00th=[35390], 99.50th=[39060], 99.90th=[46924], 99.95th=[46924], 00:36:07.359 | 99.99th=[46924] 00:36:07.359 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1952.00, stdev=56.87, samples=20 00:36:07.359 iops : min= 480, max= 512, avg=488.00, stdev=14.22, samples=20 00:36:07.359 lat (msec) : 20=0.82%, 50=99.18% 00:36:07.359 cpu : usr=98.96%, sys=0.63%, ctx=53, majf=0, minf=54 00:36:07.359 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:07.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.359 filename1: (groupid=0, jobs=1): err= 0: pid=2513685: Wed Nov 20 16:47:51 2024 00:36:07.359 read: IOPS=485, BW=1944KiB/s (1991kB/s)(19.0MiB/10009msec) 00:36:07.359 slat (nsec): min=5689, max=71008, avg=16414.44, stdev=12121.04 00:36:07.359 clat (usec): min=12527, max=61408, avg=32784.48, stdev=2262.74 00:36:07.359 lat (usec): min=12533, max=61423, avg=32800.89, stdev=2261.59 00:36:07.359 clat percentiles (usec): 00:36:07.359 | 1.00th=[29230], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:07.359 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 
00:36:07.359 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:07.359 | 99.00th=[35914], 99.50th=[38011], 99.90th=[61604], 99.95th=[61604], 00:36:07.359 | 99.99th=[61604] 00:36:07.359 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1940.21, stdev=64.19, samples=19 00:36:07.359 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:36:07.359 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:07.359 cpu : usr=98.70%, sys=0.89%, ctx=87, majf=0, minf=39 00:36:07.359 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:07.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.359 filename1: (groupid=0, jobs=1): err= 0: pid=2513686: Wed Nov 20 16:47:51 2024 00:36:07.359 read: IOPS=485, BW=1944KiB/s (1991kB/s)(19.0MiB/10009msec) 00:36:07.359 slat (nsec): min=5333, max=86371, avg=21085.60, stdev=12245.15 00:36:07.359 clat (usec): min=12748, max=61907, avg=32758.11, stdev=2944.11 00:36:07.359 lat (usec): min=12754, max=61922, avg=32779.20, stdev=2943.79 00:36:07.359 clat percentiles (usec): 00:36:07.359 | 1.00th=[23987], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:36:07.359 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.359 | 70.00th=[32900], 80.00th=[33817], 90.00th=[33817], 95.00th=[34866], 00:36:07.359 | 99.00th=[41681], 99.50th=[42206], 99.90th=[62129], 99.95th=[62129], 00:36:07.359 | 99.99th=[62129] 00:36:07.359 bw ( KiB/s): min= 1723, max= 2048, per=4.10%, avg=1933.63, stdev=70.56, samples=19 00:36:07.359 iops : min= 430, max= 512, avg=483.37, stdev=17.76, samples=19 00:36:07.359 lat (msec) : 20=0.37%, 50=99.26%, 100=0.37% 00:36:07.359 cpu : usr=99.02%, sys=0.71%, ctx=13, majf=0, minf=44 00:36:07.359 IO 
depths : 1=3.0%, 2=8.4%, 4=22.3%, 8=56.7%, 16=9.6%, 32=0.0%, >=64=0.0% 00:36:07.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.359 filename1: (groupid=0, jobs=1): err= 0: pid=2513687: Wed Nov 20 16:47:51 2024 00:36:07.359 read: IOPS=485, BW=1944KiB/s (1991kB/s)(19.0MiB/10009msec) 00:36:07.359 slat (nsec): min=5636, max=75005, avg=16227.98, stdev=9546.81 00:36:07.359 clat (usec): min=11866, max=69748, avg=32772.99, stdev=2736.23 00:36:07.359 lat (usec): min=11887, max=69764, avg=32789.22, stdev=2736.38 00:36:07.359 clat percentiles (usec): 00:36:07.359 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:07.359 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.359 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.359 | 99.00th=[35914], 99.50th=[39060], 99.90th=[69731], 99.95th=[69731], 00:36:07.359 | 99.99th=[69731] 00:36:07.359 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1933.47, stdev=58.73, samples=19 00:36:07.359 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:36:07.359 lat (msec) : 20=0.66%, 50=99.01%, 100=0.33% 00:36:07.359 cpu : usr=98.70%, sys=0.87%, ctx=93, majf=0, minf=53 00:36:07.359 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:07.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.359 filename2: (groupid=0, jobs=1): err= 0: pid=2513688: Wed Nov 20 16:47:51 2024 00:36:07.359 read: IOPS=488, 
BW=1954KiB/s (2001kB/s)(19.1MiB/10021msec) 00:36:07.359 slat (nsec): min=5680, max=53627, avg=9372.18, stdev=5313.34 00:36:07.359 clat (usec): min=12129, max=41663, avg=32667.84, stdev=1880.43 00:36:07.359 lat (usec): min=12138, max=41672, avg=32677.21, stdev=1880.24 00:36:07.359 clat percentiles (usec): 00:36:07.359 | 1.00th=[23200], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:07.359 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:07.359 | 70.00th=[32900], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:36:07.359 | 99.00th=[35390], 99.50th=[36439], 99.90th=[41157], 99.95th=[41681], 00:36:07.359 | 99.99th=[41681] 00:36:07.359 bw ( KiB/s): min= 1920, max= 2048, per=4.13%, avg=1952.00, stdev=56.87, samples=20 00:36:07.359 iops : min= 480, max= 512, avg=488.00, stdev=14.22, samples=20 00:36:07.359 lat (msec) : 20=0.65%, 50=99.35% 00:36:07.359 cpu : usr=98.82%, sys=0.85%, ctx=47, majf=0, minf=95 00:36:07.359 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:07.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.359 filename2: (groupid=0, jobs=1): err= 0: pid=2513689: Wed Nov 20 16:47:51 2024 00:36:07.359 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10016msec) 00:36:07.359 slat (nsec): min=5717, max=69973, avg=12166.90, stdev=7846.85 00:36:07.359 clat (usec): min=21410, max=49984, avg=32736.41, stdev=1847.75 00:36:07.359 lat (usec): min=21426, max=50010, avg=32748.58, stdev=1847.78 00:36:07.359 clat percentiles (usec): 00:36:07.359 | 1.00th=[22938], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:36:07.359 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:07.359 | 70.00th=[32900], 80.00th=[33424], 
90.00th=[33817], 95.00th=[34341], 00:36:07.359 | 99.00th=[39060], 99.50th=[42206], 99.90th=[50070], 99.95th=[50070], 00:36:07.359 | 99.99th=[50070] 00:36:07.359 bw ( KiB/s): min= 1916, max= 2048, per=4.12%, avg=1945.40, stdev=52.64, samples=20 00:36:07.359 iops : min= 479, max= 512, avg=486.35, stdev=13.16, samples=20 00:36:07.359 lat (msec) : 50=100.00% 00:36:07.359 cpu : usr=98.79%, sys=0.87%, ctx=72, majf=0, minf=56 00:36:07.359 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:07.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.359 filename2: (groupid=0, jobs=1): err= 0: pid=2513690: Wed Nov 20 16:47:51 2024 00:36:07.359 read: IOPS=486, BW=1945KiB/s (1992kB/s)(19.0MiB/10015msec) 00:36:07.359 slat (nsec): min=5687, max=69176, avg=15625.22, stdev=10242.47 00:36:07.359 clat (usec): min=15948, max=56958, avg=32765.97, stdev=2431.84 00:36:07.359 lat (usec): min=15960, max=56964, avg=32781.60, stdev=2431.62 00:36:07.359 clat percentiles (usec): 00:36:07.359 | 1.00th=[22676], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:07.359 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:07.359 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:07.359 | 99.00th=[43254], 99.50th=[47449], 99.90th=[53740], 99.95th=[53740], 00:36:07.359 | 99.99th=[56886] 00:36:07.359 bw ( KiB/s): min= 1840, max= 2048, per=4.12%, avg=1942.40, stdev=52.79, samples=20 00:36:07.359 iops : min= 460, max= 512, avg=485.60, stdev=13.20, samples=20 00:36:07.359 lat (msec) : 20=0.12%, 50=99.47%, 100=0.41% 00:36:07.359 cpu : usr=98.95%, sys=0.73%, ctx=61, majf=0, minf=53 00:36:07.359 IO depths : 1=2.6%, 2=8.5%, 4=24.1%, 8=54.8%, 16=9.9%, 32=0.0%, >=64=0.0% 
00:36:07.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 issued rwts: total=4870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.359 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.359 filename2: (groupid=0, jobs=1): err= 0: pid=2513691: Wed Nov 20 16:47:51 2024 00:36:07.359 read: IOPS=485, BW=1944KiB/s (1990kB/s)(19.0MiB/10010msec) 00:36:07.359 slat (nsec): min=5818, max=84129, avg=24587.79, stdev=13995.12 00:36:07.359 clat (usec): min=15260, max=42796, avg=32696.59, stdev=1179.09 00:36:07.359 lat (usec): min=15268, max=42823, avg=32721.17, stdev=1178.57 00:36:07.359 clat percentiles (usec): 00:36:07.359 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:36:07.359 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.359 | 70.00th=[32637], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.359 | 99.00th=[35390], 99.50th=[38011], 99.90th=[42730], 99.95th=[42730], 00:36:07.359 | 99.99th=[42730] 00:36:07.359 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1940.21, stdev=64.19, samples=19 00:36:07.359 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:36:07.359 lat (msec) : 20=0.04%, 50=99.96% 00:36:07.359 cpu : usr=99.05%, sys=0.68%, ctx=10, majf=0, minf=44 00:36:07.359 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.359 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.360 filename2: (groupid=0, jobs=1): err= 0: pid=2513692: Wed Nov 20 16:47:51 2024 00:36:07.360 read: IOPS=486, BW=1945KiB/s (1992kB/s)(19.1MiB/10049msec) 00:36:07.360 slat (nsec): min=5670, 
max=79920, avg=14811.36, stdev=11114.51 00:36:07.360 clat (usec): min=13463, max=77100, avg=32776.69, stdev=3868.59 00:36:07.360 lat (usec): min=13468, max=77115, avg=32791.51, stdev=3867.89 00:36:07.360 clat percentiles (usec): 00:36:07.360 | 1.00th=[22152], 5.00th=[25822], 10.00th=[32113], 20.00th=[32375], 00:36:07.360 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32637], 00:36:07.360 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[35914], 00:36:07.360 | 99.00th=[46924], 99.50th=[52167], 99.90th=[61080], 99.95th=[61080], 00:36:07.360 | 99.99th=[77071] 00:36:07.360 bw ( KiB/s): min= 1776, max= 2048, per=4.12%, avg=1946.95, stdev=57.21, samples=19 00:36:07.360 iops : min= 444, max= 512, avg=486.74, stdev=14.30, samples=19 00:36:07.360 lat (msec) : 20=0.70%, 50=98.73%, 100=0.57% 00:36:07.360 cpu : usr=98.54%, sys=1.01%, ctx=148, majf=0, minf=85 00:36:07.360 IO depths : 1=0.3%, 2=2.3%, 4=9.0%, 8=72.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:36:07.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.360 complete : 0=0.0%, 4=91.0%, 8=6.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.360 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.360 filename2: (groupid=0, jobs=1): err= 0: pid=2513693: Wed Nov 20 16:47:51 2024 00:36:07.360 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10013msec) 00:36:07.360 slat (nsec): min=5674, max=66972, avg=18447.11, stdev=11222.99 00:36:07.360 clat (usec): min=12419, max=55027, avg=32401.75, stdev=2813.96 00:36:07.360 lat (usec): min=12444, max=55044, avg=32420.20, stdev=2814.92 00:36:07.360 clat percentiles (usec): 00:36:07.360 | 1.00th=[21365], 5.00th=[28181], 10.00th=[32113], 20.00th=[32375], 00:36:07.360 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.360 | 70.00th=[32637], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.360 | 99.00th=[39584], 
99.50th=[47973], 99.90th=[54789], 99.95th=[54789], 00:36:07.360 | 99.99th=[54789] 00:36:07.360 bw ( KiB/s): min= 1792, max= 2192, per=4.15%, avg=1957.89, stdev=91.01, samples=19 00:36:07.360 iops : min= 448, max= 548, avg=489.47, stdev=22.75, samples=19 00:36:07.360 lat (msec) : 20=0.20%, 50=99.47%, 100=0.33% 00:36:07.360 cpu : usr=98.86%, sys=0.86%, ctx=41, majf=0, minf=61 00:36:07.360 IO depths : 1=5.7%, 2=11.4%, 4=23.4%, 8=52.6%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:07.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.360 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.360 issued rwts: total=4916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.360 filename2: (groupid=0, jobs=1): err= 0: pid=2513694: Wed Nov 20 16:47:51 2024 00:36:07.360 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10011msec) 00:36:07.360 slat (nsec): min=5641, max=85633, avg=24666.23, stdev=13380.73 00:36:07.360 clat (usec): min=12490, max=63459, avg=32691.56, stdev=2340.69 00:36:07.360 lat (usec): min=12528, max=63475, avg=32716.22, stdev=2340.28 00:36:07.360 clat percentiles (usec): 00:36:07.360 | 1.00th=[28967], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:36:07.360 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.360 | 70.00th=[32637], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.360 | 99.00th=[35914], 99.50th=[38011], 99.90th=[63177], 99.95th=[63701], 00:36:07.360 | 99.99th=[63701] 00:36:07.360 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1933.47, stdev=58.73, samples=19 00:36:07.360 iops : min= 448, max= 512, avg=483.37, stdev=14.68, samples=19 00:36:07.360 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:07.360 cpu : usr=98.80%, sys=0.84%, ctx=80, majf=0, minf=48 00:36:07.360 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.360 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.360 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.360 filename2: (groupid=0, jobs=1): err= 0: pid=2513695: Wed Nov 20 16:47:51 2024 00:36:07.360 read: IOPS=485, BW=1944KiB/s (1991kB/s)(19.0MiB/10009msec) 00:36:07.360 slat (nsec): min=5404, max=84514, avg=26630.31, stdev=14328.60 00:36:07.360 clat (usec): min=12310, max=61022, avg=32679.20, stdev=2234.10 00:36:07.360 lat (usec): min=12317, max=61038, avg=32705.83, stdev=2233.15 00:36:07.360 clat percentiles (usec): 00:36:07.360 | 1.00th=[29230], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:36:07.360 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:07.360 | 70.00th=[32637], 80.00th=[33424], 90.00th=[33817], 95.00th=[33817], 00:36:07.360 | 99.00th=[35914], 99.50th=[38011], 99.90th=[61080], 99.95th=[61080], 00:36:07.360 | 99.99th=[61080] 00:36:07.360 bw ( KiB/s): min= 1792, max= 2048, per=4.11%, avg=1940.21, stdev=64.19, samples=19 00:36:07.360 iops : min= 448, max= 512, avg=485.05, stdev=16.05, samples=19 00:36:07.360 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:36:07.360 cpu : usr=98.78%, sys=0.85%, ctx=59, majf=0, minf=45 00:36:07.360 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:07.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.360 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:07.360 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:07.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:07.360 00:36:07.360 Run status group 0 (all jobs): 00:36:07.360 READ: bw=46.1MiB/s (48.3MB/s), 1943KiB/s-2424KiB/s (1990kB/s-2482kB/s), io=463MiB (486MB), run=10003-10049msec 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.360 16:47:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:07.360 16:47:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 bdev_null0 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.360 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.361 [2024-11-20 16:47:52.119357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.361 bdev_null1 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:07.361 { 00:36:07.361 "params": { 00:36:07.361 "name": "Nvme$subsystem", 00:36:07.361 "trtype": "$TEST_TRANSPORT", 00:36:07.361 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:36:07.361 "adrfam": "ipv4", 00:36:07.361 "trsvcid": "$NVMF_PORT", 00:36:07.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.361 "hdgst": ${hdgst:-false}, 00:36:07.361 "ddgst": ${ddgst:-false} 00:36:07.361 }, 00:36:07.361 "method": "bdev_nvme_attach_controller" 00:36:07.361 } 00:36:07.361 EOF 00:36:07.361 )") 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:07.361 16:47:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:07.361 { 00:36:07.361 "params": { 00:36:07.361 "name": "Nvme$subsystem", 00:36:07.361 "trtype": "$TEST_TRANSPORT", 00:36:07.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.361 "adrfam": "ipv4", 00:36:07.361 "trsvcid": "$NVMF_PORT", 00:36:07.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.361 "hdgst": ${hdgst:-false}, 00:36:07.361 "ddgst": ${ddgst:-false} 00:36:07.361 }, 00:36:07.361 "method": "bdev_nvme_attach_controller" 00:36:07.361 } 00:36:07.361 EOF 00:36:07.361 )") 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:07.361 "params": { 00:36:07.361 "name": "Nvme0", 00:36:07.361 "trtype": "tcp", 00:36:07.361 "traddr": "10.0.0.2", 00:36:07.361 "adrfam": "ipv4", 00:36:07.361 "trsvcid": "4420", 00:36:07.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:07.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:07.361 "hdgst": false, 00:36:07.361 "ddgst": false 00:36:07.361 }, 00:36:07.361 "method": "bdev_nvme_attach_controller" 00:36:07.361 },{ 00:36:07.361 "params": { 00:36:07.361 "name": "Nvme1", 00:36:07.361 "trtype": "tcp", 00:36:07.361 "traddr": "10.0.0.2", 00:36:07.361 "adrfam": "ipv4", 00:36:07.361 "trsvcid": "4420", 00:36:07.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:07.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:07.361 "hdgst": false, 00:36:07.361 "ddgst": false 00:36:07.361 }, 00:36:07.361 "method": "bdev_nvme_attach_controller" 00:36:07.361 }' 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:07.361 16:47:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:07.361 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.361 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:07.361 ... 00:36:07.361 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:07.361 ... 00:36:07.361 fio-3.35 00:36:07.361 Starting 4 threads 00:36:12.641 00:36:12.641 filename0: (groupid=0, jobs=1): err= 0: pid=2516154: Wed Nov 20 16:47:58 2024 00:36:12.641 read: IOPS=2040, BW=15.9MiB/s (16.7MB/s)(80.4MiB/5042msec) 00:36:12.641 slat (nsec): min=5495, max=28419, avg=6120.34, stdev=1834.97 00:36:12.641 clat (usec): min=2099, max=44136, avg=3882.92, stdev=1310.18 00:36:12.641 lat (usec): min=2105, max=44160, avg=3889.04, stdev=1310.28 00:36:12.641 clat percentiles (usec): 00:36:12.641 | 1.00th=[ 3589], 5.00th=[ 3785], 10.00th=[ 3785], 20.00th=[ 3818], 00:36:12.641 | 30.00th=[ 3818], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3851], 00:36:12.641 | 70.00th=[ 3851], 80.00th=[ 3851], 90.00th=[ 3884], 95.00th=[ 3884], 00:36:12.641 | 99.00th=[ 4178], 99.50th=[ 4621], 99.90th=[42730], 99.95th=[44303], 00:36:12.641 | 99.99th=[44303] 00:36:12.641 bw ( KiB/s): min=15120, max=16640, per=24.99%, avg=16454.40, stdev=470.13, samples=10 00:36:12.641 iops : min= 1890, max= 2080, avg=2056.80, stdev=58.77, samples=10 00:36:12.641 lat (msec) : 4=97.17%, 10=2.72%, 50=0.11% 00:36:12.641 cpu : usr=96.65%, sys=2.98%, ctx=117, majf=0, minf=0 00:36:12.641 IO depths : 1=0.1%, 2=0.1%, 4=74.8%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.641 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.641 issued rwts: total=10287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.641 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:12.641 filename0: (groupid=0, jobs=1): err= 0: pid=2516155: Wed Nov 20 16:47:58 2024 00:36:12.641 read: IOPS=2085, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5001msec) 00:36:12.641 slat (nsec): min=5491, max=56281, avg=6347.80, stdev=2421.33 00:36:12.641 clat (usec): min=769, max=6661, avg=3822.13, stdev=191.60 00:36:12.641 lat (usec): min=785, max=6688, avg=3828.47, stdev=191.39 00:36:12.641 clat percentiles (usec): 00:36:12.641 | 1.00th=[ 3064], 5.00th=[ 3621], 10.00th=[ 3785], 20.00th=[ 3818], 00:36:12.641 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3851], 60.00th=[ 3851], 00:36:12.641 | 70.00th=[ 3851], 80.00th=[ 3851], 90.00th=[ 3884], 95.00th=[ 3884], 00:36:12.641 | 99.00th=[ 4228], 99.50th=[ 4948], 99.90th=[ 5538], 99.95th=[ 5538], 00:36:12.641 | 99.99th=[ 5735] 00:36:12.641 bw ( KiB/s): min=16560, max=16816, per=25.32%, avg=16675.56, stdev=88.28, samples=9 00:36:12.641 iops : min= 2070, max= 2102, avg=2084.44, stdev=11.04, samples=9 00:36:12.641 lat (usec) : 1000=0.01% 00:36:12.641 lat (msec) : 2=0.03%, 4=97.78%, 10=2.18% 00:36:12.641 cpu : usr=96.68%, sys=3.08%, ctx=5, majf=0, minf=9 00:36:12.641 IO depths : 1=0.1%, 2=0.1%, 4=65.2%, 8=34.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.641 complete : 0=0.0%, 4=98.0%, 8=2.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.641 issued rwts: total=10428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.641 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:12.641 filename1: (groupid=0, jobs=1): err= 0: pid=2516156: Wed Nov 20 16:47:58 2024 00:36:12.641 read: IOPS=2074, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5001msec) 00:36:12.641 slat (nsec): min=5506, max=30665, avg=6001.03, stdev=1482.64 00:36:12.641 clat (usec): min=2096, max=5995, avg=3840.28, stdev=191.37 
00:36:12.641 lat (usec): min=2102, max=6000, avg=3846.28, stdev=191.09 00:36:12.641 clat percentiles (usec): 00:36:12.641 | 1.00th=[ 3228], 5.00th=[ 3752], 10.00th=[ 3785], 20.00th=[ 3818], 00:36:12.641 | 30.00th=[ 3818], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3851], 00:36:12.641 | 70.00th=[ 3851], 80.00th=[ 3851], 90.00th=[ 3884], 95.00th=[ 3884], 00:36:12.641 | 99.00th=[ 4228], 99.50th=[ 5342], 99.90th=[ 5997], 99.95th=[ 5997], 00:36:12.641 | 99.99th=[ 5997] 00:36:12.641 bw ( KiB/s): min=16496, max=16768, per=25.20%, avg=16593.78, stdev=78.56, samples=9 00:36:12.641 iops : min= 2062, max= 2096, avg=2074.22, stdev= 9.82, samples=9 00:36:12.641 lat (msec) : 4=96.95%, 10=3.05% 00:36:12.641 cpu : usr=96.96%, sys=2.82%, ctx=16, majf=0, minf=9 00:36:12.641 IO depths : 1=0.1%, 2=0.1%, 4=73.4%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.641 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.641 issued rwts: total=10373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.641 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:12.641 filename1: (groupid=0, jobs=1): err= 0: pid=2516157: Wed Nov 20 16:47:58 2024 00:36:12.641 read: IOPS=2081, BW=16.3MiB/s (17.1MB/s)(81.3MiB/5001msec) 00:36:12.641 slat (nsec): min=5502, max=57230, avg=6391.17, stdev=2384.60 00:36:12.641 clat (usec): min=1968, max=5781, avg=3828.53, stdev=142.72 00:36:12.641 lat (usec): min=1974, max=5787, avg=3834.92, stdev=142.30 00:36:12.641 clat percentiles (usec): 00:36:12.641 | 1.00th=[ 3130], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3818], 00:36:12.641 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3851], 60.00th=[ 3851], 00:36:12.641 | 70.00th=[ 3851], 80.00th=[ 3851], 90.00th=[ 3884], 95.00th=[ 3884], 00:36:12.641 | 99.00th=[ 4146], 99.50th=[ 4178], 99.90th=[ 5080], 99.95th=[ 5145], 00:36:12.641 | 99.99th=[ 5800] 00:36:12.641 bw ( KiB/s): min=16592, max=16880, per=25.30%, 
avg=16656.00, stdev=86.53, samples=9 00:36:12.641 iops : min= 2074, max= 2110, avg=2082.00, stdev=10.82, samples=9 00:36:12.641 lat (msec) : 2=0.03%, 4=97.48%, 10=2.49% 00:36:12.641 cpu : usr=96.54%, sys=3.22%, ctx=6, majf=0, minf=9 00:36:12.641 IO depths : 1=0.1%, 2=0.1%, 4=64.1%, 8=35.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.641 complete : 0=0.0%, 4=98.8%, 8=1.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.641 issued rwts: total=10412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.641 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:12.641 00:36:12.641 Run status group 0 (all jobs): 00:36:12.641 READ: bw=64.3MiB/s (67.4MB/s), 15.9MiB/s-16.3MiB/s (16.7MB/s-17.1MB/s), io=324MiB (340MB), run=5001-5042msec 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.641 00:36:12.641 real 0m24.599s 00:36:12.641 user 5m16.430s 00:36:12.641 sys 0m4.236s 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:12.641 16:47:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.641 ************************************ 00:36:12.641 END TEST fio_dif_rand_params 00:36:12.641 ************************************ 00:36:12.641 16:47:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:12.641 16:47:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:12.641 16:47:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:12.641 16:47:58 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:12.900 ************************************ 00:36:12.900 START TEST fio_dif_digest 00:36:12.900 ************************************ 00:36:12.900 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:12.900 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:12.900 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:12.900 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:12.900 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:12.900 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:12.900 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:12.900 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.901 bdev_null0 00:36:12.901 16:47:58 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:12.901 [2024-11-20 16:47:58.684429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:12.901 { 00:36:12.901 "params": { 00:36:12.901 "name": "Nvme$subsystem", 00:36:12.901 "trtype": "$TEST_TRANSPORT", 00:36:12.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:12.901 "adrfam": "ipv4", 00:36:12.901 "trsvcid": "$NVMF_PORT", 00:36:12.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:12.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:12.901 "hdgst": ${hdgst:-false}, 00:36:12.901 "ddgst": ${ddgst:-false} 00:36:12.901 }, 00:36:12.901 "method": "bdev_nvme_attach_controller" 00:36:12.901 } 00:36:12.901 EOF 00:36:12.901 )") 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:12.901 16:47:58 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:12.901 "params": { 00:36:12.901 "name": "Nvme0", 00:36:12.901 "trtype": "tcp", 00:36:12.901 "traddr": "10.0.0.2", 00:36:12.901 "adrfam": "ipv4", 00:36:12.901 "trsvcid": "4420", 00:36:12.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:12.901 "hdgst": true, 00:36:12.901 "ddgst": true 00:36:12.901 }, 00:36:12.901 "method": "bdev_nvme_attach_controller" 00:36:12.901 }' 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.901 16:47:58 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:12.901 16:47:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:13.160 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:13.160 ... 00:36:13.160 fio-3.35 00:36:13.160 Starting 3 threads 00:36:25.390 00:36:25.390 filename0: (groupid=0, jobs=1): err= 0: pid=2517393: Wed Nov 20 16:48:09 2024 00:36:25.390 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(270MiB/10048msec) 00:36:25.390 slat (nsec): min=5904, max=32000, avg=7918.12, stdev=1595.45 00:36:25.390 clat (usec): min=8785, max=53794, avg=13943.74, stdev=2190.43 00:36:25.390 lat (usec): min=8794, max=53801, avg=13951.66, stdev=2190.66 00:36:25.390 clat percentiles (usec): 00:36:25.390 | 1.00th=[10028], 5.00th=[11994], 10.00th=[12649], 20.00th=[13173], 00:36:25.390 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:36:25.390 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:36:25.390 | 99.00th=[16188], 99.50th=[16581], 99.90th=[52691], 99.95th=[53216], 00:36:25.390 | 99.99th=[53740] 00:36:25.390 bw ( KiB/s): min=25088, max=28928, per=33.87%, avg=27584.00, stdev=787.40, samples=20 00:36:25.390 iops : min= 196, max= 226, avg=215.50, stdev= 6.15, samples=20 00:36:25.390 lat (msec) : 10=0.93%, 20=98.84%, 100=0.23% 00:36:25.390 cpu : usr=95.19%, sys=4.56%, ctx=11, 
majf=0, minf=106 00:36:25.390 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.390 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.390 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.390 filename0: (groupid=0, jobs=1): err= 0: pid=2517394: Wed Nov 20 16:48:09 2024 00:36:25.390 read: IOPS=212, BW=26.6MiB/s (27.8MB/s)(267MiB/10045msec) 00:36:25.390 slat (nsec): min=5921, max=37228, avg=7819.87, stdev=1674.75 00:36:25.390 clat (usec): min=8042, max=58056, avg=14091.98, stdev=3473.23 00:36:25.390 lat (usec): min=8048, max=58088, avg=14099.80, stdev=3473.43 00:36:25.390 clat percentiles (usec): 00:36:25.390 | 1.00th=[10814], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:36:25.390 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:36:25.390 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:36:25.390 | 99.00th=[16712], 99.50th=[54264], 99.90th=[56886], 99.95th=[57934], 00:36:25.390 | 99.99th=[57934] 00:36:25.390 bw ( KiB/s): min=25088, max=28672, per=33.51%, avg=27289.60, stdev=1032.05, samples=20 00:36:25.390 iops : min= 196, max= 224, avg=213.20, stdev= 8.06, samples=20 00:36:25.390 lat (msec) : 10=0.84%, 20=98.50%, 50=0.05%, 100=0.61% 00:36:25.390 cpu : usr=94.22%, sys=5.30%, ctx=433, majf=0, minf=148 00:36:25.390 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.390 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.390 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.390 filename0: (groupid=0, jobs=1): err= 0: pid=2517395: Wed Nov 20 
16:48:09 2024 00:36:25.391 read: IOPS=209, BW=26.2MiB/s (27.4MB/s)(263MiB/10047msec) 00:36:25.391 slat (nsec): min=5875, max=33491, avg=7623.30, stdev=1644.47 00:36:25.391 clat (usec): min=7047, max=56853, avg=14310.04, stdev=2736.62 00:36:25.391 lat (usec): min=7062, max=56859, avg=14317.67, stdev=2736.42 00:36:25.391 clat percentiles (usec): 00:36:25.391 | 1.00th=[ 9896], 5.00th=[12387], 10.00th=[12911], 20.00th=[13435], 00:36:25.391 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:36:25.391 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:36:25.391 | 99.00th=[16909], 99.50th=[17433], 99.90th=[54264], 99.95th=[56361], 00:36:25.391 | 99.99th=[56886] 00:36:25.391 bw ( KiB/s): min=24576, max=28416, per=33.00%, avg=26880.00, stdev=838.84, samples=20 00:36:25.391 iops : min= 192, max= 222, avg=210.00, stdev= 6.55, samples=20 00:36:25.391 lat (msec) : 10=1.14%, 20=98.48%, 50=0.05%, 100=0.33% 00:36:25.391 cpu : usr=94.95%, sys=4.80%, ctx=9, majf=0, minf=141 00:36:25.391 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:25.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.391 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:25.391 00:36:25.391 Run status group 0 (all jobs): 00:36:25.391 READ: bw=79.5MiB/s (83.4MB/s), 26.2MiB/s-26.8MiB/s (27.4MB/s-28.1MB/s), io=799MiB (838MB), run=10045-10048msec 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- 
target/dif.sh@36 -- # local sub_id=0 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.391 00:36:25.391 real 0m11.119s 00:36:25.391 user 0m40.981s 00:36:25.391 sys 0m1.781s 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.391 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:25.391 ************************************ 00:36:25.391 END TEST fio_dif_digest 00:36:25.391 ************************************ 00:36:25.391 16:48:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:25.391 16:48:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:25.391 rmmod nvme_tcp 00:36:25.391 rmmod nvme_fabrics 00:36:25.391 rmmod nvme_keyring 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2507238 ']' 00:36:25.391 16:48:09 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2507238 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2507238 ']' 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2507238 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507238 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507238' 00:36:25.391 killing process with pid 2507238 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2507238 00:36:25.391 16:48:09 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2507238 00:36:25.391 16:48:10 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:25.391 16:48:10 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:27.941 Waiting for block devices as requested 00:36:27.941 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:27.941 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:27.941 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:27.941 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:27.941 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:27.941 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:27.941 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:28.202 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 
00:36:28.202 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:28.462 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:28.462 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:28.462 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:28.462 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:28.723 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:28.723 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:28.723 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:28.983 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:29.244 16:48:15 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.244 16:48:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:29.244 16:48:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.156 16:48:17 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:31.156 00:36:31.156 real 1m17.653s 00:36:31.156 user 8m0.316s 00:36:31.156 sys 0m21.263s 00:36:31.156 16:48:17 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:31.156 16:48:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:31.156 ************************************ 00:36:31.156 END TEST nvmf_dif 00:36:31.156 ************************************ 00:36:31.422 16:48:17 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:31.422 16:48:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:31.422 16:48:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:31.422 16:48:17 -- common/autotest_common.sh@10 -- # set +x 00:36:31.422 ************************************ 00:36:31.422 START TEST nvmf_abort_qd_sizes 00:36:31.422 ************************************ 00:36:31.422 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:31.422 * Looking for test storage... 00:36:31.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:31.422 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:31.422 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:36:31.422 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:31.422 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:31.422 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:31.422 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:31.422 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.423 --rc genhtml_branch_coverage=1 00:36:31.423 --rc genhtml_function_coverage=1 00:36:31.423 --rc 
genhtml_legend=1 00:36:31.423 --rc geninfo_all_blocks=1 00:36:31.423 --rc geninfo_unexecuted_blocks=1 00:36:31.423 00:36:31.423 ' 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.423 --rc genhtml_branch_coverage=1 00:36:31.423 --rc genhtml_function_coverage=1 00:36:31.423 --rc genhtml_legend=1 00:36:31.423 --rc geninfo_all_blocks=1 00:36:31.423 --rc geninfo_unexecuted_blocks=1 00:36:31.423 00:36:31.423 ' 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.423 --rc genhtml_branch_coverage=1 00:36:31.423 --rc genhtml_function_coverage=1 00:36:31.423 --rc genhtml_legend=1 00:36:31.423 --rc geninfo_all_blocks=1 00:36:31.423 --rc geninfo_unexecuted_blocks=1 00:36:31.423 00:36:31.423 ' 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.423 --rc genhtml_branch_coverage=1 00:36:31.423 --rc genhtml_function_coverage=1 00:36:31.423 --rc genhtml_legend=1 00:36:31.423 --rc geninfo_all_blocks=1 00:36:31.423 --rc geninfo_unexecuted_blocks=1 00:36:31.423 00:36:31.423 ' 00:36:31.423 16:48:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:31.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:36:31.754 16:48:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:39.892 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:39.892 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:39.892 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:39.893 Found net devices under 0000:31:00.0: cvl_0_0 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:39.893 Found net devices under 0000:31:00.1: cvl_0_1 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:39.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:39.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:36:39.893 00:36:39.893 --- 10.0.0.2 ping statistics --- 00:36:39.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.893 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:39.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:39.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:36:39.893 00:36:39.893 --- 10.0.0.1 ping statistics --- 00:36:39.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:39.893 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:39.893 16:48:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:42.435 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:42.435 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:42.695 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:36:42.695 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:42.695 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2527446 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2527446 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2527446 ']' 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:42.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:42.955 16:48:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:42.955 [2024-11-20 16:48:28.906147] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:36:42.955 [2024-11-20 16:48:28.906198] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.216 [2024-11-20 16:48:28.990422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:43.216 [2024-11-20 16:48:29.029676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.216 [2024-11-20 16:48:29.029711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:43.216 [2024-11-20 16:48:29.029719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.216 [2024-11-20 16:48:29.029726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.216 [2024-11-20 16:48:29.029732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:43.216 [2024-11-20 16:48:29.031336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.216 [2024-11-20 16:48:29.031474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:43.216 [2024-11-20 16:48:29.031633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.216 [2024-11-20 16:48:29.031634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:43.786 16:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:43.786 16:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:43.786 16:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:43.786 16:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:43.786 16:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:44.047 16:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:44.047 ************************************ 00:36:44.047 START TEST spdk_target_abort 00:36:44.047 ************************************ 00:36:44.047 16:48:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:44.047 16:48:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:44.047 16:48:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:44.047 16:48:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.047 16:48:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.308 spdk_targetn1 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.308 [2024-11-20 16:48:30.113115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:44.308 [2024-11-20 16:48:30.161423] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:44.308 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:44.309 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:44.570 [2024-11-20 16:48:30.312462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:296 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:44.570 [2024-11-20 16:48:30.312490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0028 p:1 m:0 dnr:0 00:36:44.570 [2024-11-20 16:48:30.328478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:848 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:44.570 [2024-11-20 16:48:30.328495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:006d p:1 m:0 dnr:0 00:36:44.570 [2024-11-20 16:48:30.336462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1152 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:36:44.570 [2024-11-20 
16:48:30.336477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0091 p:1 m:0 dnr:0 00:36:44.570 [2024-11-20 16:48:30.352440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1784 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:36:44.570 [2024-11-20 16:48:30.352456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e0 p:1 m:0 dnr:0 00:36:44.570 [2024-11-20 16:48:30.400448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3464 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:36:44.570 [2024-11-20 16:48:30.400465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:36:44.570 [2024-11-20 16:48:30.408429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3744 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:36:44.570 [2024-11-20 16:48:30.408443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00d7 p:0 m:0 dnr:0 00:36:47.869 Initializing NVMe Controllers 00:36:47.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:47.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:47.869 Initialization complete. Launching workers. 
00:36:47.869 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12570, failed: 6 00:36:47.869 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3077, failed to submit 9499 00:36:47.869 success 723, unsuccessful 2354, failed 0 00:36:47.870 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:47.870 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.870 [2024-11-20 16:48:33.494139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:312 len:8 PRP1 0x200004e58000 PRP2 0x0 00:36:47.870 [2024-11-20 16:48:33.494171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:36:47.870 [2024-11-20 16:48:33.518233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:856 len:8 PRP1 0x200004e56000 PRP2 0x0 00:36:47.870 [2024-11-20 16:48:33.518259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:36:47.870 [2024-11-20 16:48:33.590133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:2488 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:36:47.870 [2024-11-20 16:48:33.590158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:36:47.870 [2024-11-20 16:48:33.622157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:3328 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:36:47.870 [2024-11-20 16:48:33.622183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:4 cid:180 cdw0:0 sqhd:00a9 p:0 m:0 dnr:0 00:36:48.809 [2024-11-20 16:48:34.712087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:28616 len:8 PRP1 0x200004e3a000 PRP2 0x0 00:36:48.809 [2024-11-20 16:48:34.712119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00fb p:0 m:0 dnr:0 00:36:50.717 Initializing NVMe Controllers 00:36:50.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:50.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:50.717 Initialization complete. Launching workers. 00:36:50.717 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8566, failed: 5 00:36:50.717 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 7351 00:36:50.717 success 343, unsuccessful 877, failed 0 00:36:50.717 16:48:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:50.717 16:48:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:51.286 [2024-11-20 16:48:37.068906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:23744 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:51.286 [2024-11-20 16:48:37.068931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00be p:1 m:0 dnr:0 00:36:53.196 [2024-11-20 16:48:38.672290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:156 nsid:1 lba:202240 len:8 PRP1 0x200004ad8000 PRP2 0x0 00:36:53.196 [2024-11-20 16:48:38.672314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - BY REQUEST (00/07) qid:2 cid:156 cdw0:0 sqhd:00e9 p:0 m:0 dnr:0 00:36:54.134 Initializing NVMe Controllers 00:36:54.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:54.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:54.134 Initialization complete. Launching workers. 00:36:54.134 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41825, failed: 2 00:36:54.134 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2767, failed to submit 39060 00:36:54.134 success 571, unsuccessful 2196, failed 0 00:36:54.134 16:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:54.134 16:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.134 16:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:54.134 16:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.135 16:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:54.135 16:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.135 16:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2527446 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2527446 ']' 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2527446 00:36:56.047 16:48:41 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2527446 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2527446' 00:36:56.047 killing process with pid 2527446 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2527446 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2527446 00:36:56.047 00:36:56.047 real 0m12.138s 00:36:56.047 user 0m49.555s 00:36:56.047 sys 0m1.866s 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:56.047 16:48:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:56.047 ************************************ 00:36:56.047 END TEST spdk_target_abort 00:36:56.047 ************************************ 00:36:56.047 16:48:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:56.047 16:48:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:56.047 16:48:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:56.047 16:48:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:56.308 ************************************ 00:36:56.308 START TEST kernel_target_abort 00:36:56.308 ************************************ 00:36:56.308 
16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:56.308 16:48:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:56.308 16:48:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:59.606 Waiting for block devices as requested 00:36:59.606 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:59.606 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:59.867 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:59.867 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:59.867 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:00.127 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:00.127 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:00.127 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:00.386 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:00.386 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:00.386 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:00.646 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:00.646 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:00.646 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:00.646 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:00.906 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:00.906 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:01.167 16:48:47 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:01.167 No valid GPT data, bailing 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:01.167 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:37:01.428 00:37:01.428 Discovery Log Number of Records 2, Generation counter 2 00:37:01.428 =====Discovery Log Entry 0====== 00:37:01.428 trtype: tcp 00:37:01.428 adrfam: ipv4 00:37:01.428 subtype: current discovery subsystem 00:37:01.428 treq: not specified, sq flow control disable supported 00:37:01.428 portid: 1 00:37:01.428 trsvcid: 4420 00:37:01.428 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:01.428 traddr: 10.0.0.1 00:37:01.428 eflags: none 00:37:01.428 sectype: none 00:37:01.428 
=====Discovery Log Entry 1====== 00:37:01.428 trtype: tcp 00:37:01.428 adrfam: ipv4 00:37:01.428 subtype: nvme subsystem 00:37:01.428 treq: not specified, sq flow control disable supported 00:37:01.428 portid: 1 00:37:01.428 trsvcid: 4420 00:37:01.428 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:01.428 traddr: 10.0.0.1 00:37:01.428 eflags: none 00:37:01.428 sectype: none 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp 
adrfam:IPv4' 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:01.428 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:04.723 Initializing NVMe Controllers 00:37:04.723 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:04.723 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:04.723 Initialization complete. Launching workers. 
00:37:04.723 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68207, failed: 0 00:37:04.723 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 68207, failed to submit 0 00:37:04.723 success 0, unsuccessful 68207, failed 0 00:37:04.723 16:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:04.723 16:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:08.017 Initializing NVMe Controllers 00:37:08.017 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:08.017 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:08.017 Initialization complete. Launching workers. 00:37:08.017 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108746, failed: 0 00:37:08.017 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27382, failed to submit 81364 00:37:08.017 success 0, unsuccessful 27382, failed 0 00:37:08.017 16:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:08.017 16:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:11.314 Initializing NVMe Controllers 00:37:11.314 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:11.315 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:11.315 Initialization complete. Launching workers. 
00:37:11.315 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102815, failed: 0 00:37:11.315 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25726, failed to submit 77089 00:37:11.315 success 0, unsuccessful 25726, failed 0 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:11.315 16:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:14.614 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:14.614 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:15.998 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:16.571 00:37:16.571 real 0m20.269s 00:37:16.571 user 0m9.987s 00:37:16.571 sys 0m6.146s 00:37:16.571 16:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.571 16:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:16.571 ************************************ 00:37:16.571 END TEST kernel_target_abort 00:37:16.571 ************************************ 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:16.571 rmmod nvme_tcp 00:37:16.571 rmmod nvme_fabrics 00:37:16.571 rmmod nvme_keyring 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2527446 ']' 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2527446 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2527446 ']' 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2527446 00:37:16.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2527446) - No such process 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2527446 is not found' 00:37:16.571 Process with pid 2527446 is not found 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:16.571 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:19.981 Waiting for block devices as requested 00:37:19.981 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:19.981 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:19.981 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:19.981 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:19.981 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:19.981 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:19.981 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:19.981 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:20.242 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:20.242 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:20.503 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:20.503 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:20.503 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:20.503 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:20.763 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:20.763 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:20.763 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:21.023 16:49:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.566 16:49:09 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:23.566 00:37:23.566 real 0m51.864s 00:37:23.566 user 1m4.944s 00:37:23.566 sys 0m18.781s 00:37:23.566 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:23.566 16:49:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:23.566 ************************************ 00:37:23.566 END TEST nvmf_abort_qd_sizes 00:37:23.566 ************************************ 00:37:23.566 16:49:09 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:23.566 16:49:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.566 16:49:09 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:37:23.566 16:49:09 -- common/autotest_common.sh@10 -- # set +x 00:37:23.566 ************************************ 00:37:23.566 START TEST keyring_file 00:37:23.566 ************************************ 00:37:23.566 16:49:09 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:23.566 * Looking for test storage... 00:37:23.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:23.566 16:49:09 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:23.566 16:49:09 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:37:23.566 16:49:09 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:23.566 16:49:09 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:23.566 16:49:09 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:23.566 16:49:09 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:23.566 16:49:09 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:23.566 16:49:09 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:23.566 16:49:09 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:23.566 16:49:09 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:23.567 16:49:09 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:23.567 16:49:09 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:23.567 16:49:09 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:23.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.567 --rc genhtml_branch_coverage=1 00:37:23.567 --rc genhtml_function_coverage=1 00:37:23.567 --rc genhtml_legend=1 00:37:23.567 --rc geninfo_all_blocks=1 00:37:23.567 --rc geninfo_unexecuted_blocks=1 00:37:23.567 00:37:23.567 ' 00:37:23.567 16:49:09 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:23.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.567 --rc genhtml_branch_coverage=1 00:37:23.567 --rc genhtml_function_coverage=1 00:37:23.567 --rc genhtml_legend=1 00:37:23.567 --rc geninfo_all_blocks=1 00:37:23.567 --rc 
geninfo_unexecuted_blocks=1 00:37:23.567 00:37:23.567 ' 00:37:23.567 16:49:09 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:23.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.567 --rc genhtml_branch_coverage=1 00:37:23.567 --rc genhtml_function_coverage=1 00:37:23.567 --rc genhtml_legend=1 00:37:23.567 --rc geninfo_all_blocks=1 00:37:23.567 --rc geninfo_unexecuted_blocks=1 00:37:23.567 00:37:23.567 ' 00:37:23.567 16:49:09 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:23.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.567 --rc genhtml_branch_coverage=1 00:37:23.567 --rc genhtml_function_coverage=1 00:37:23.567 --rc genhtml_legend=1 00:37:23.567 --rc geninfo_all_blocks=1 00:37:23.567 --rc geninfo_unexecuted_blocks=1 00:37:23.567 00:37:23.567 ' 00:37:23.567 16:49:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:23.567 16:49:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.567 16:49:09 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.567 16:49:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.567 16:49:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.567 16:49:09 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.567 16:49:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.567 16:49:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:23.567 16:49:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.567 16:49:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:23.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bk9rDVpQZ7 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bk9rDVpQZ7 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bk9rDVpQZ7 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bk9rDVpQZ7 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6BgbM4eErd 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:23.568 16:49:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6BgbM4eErd 00:37:23.568 16:49:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6BgbM4eErd 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6BgbM4eErd 
00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=2537708 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2537708 00:37:23.568 16:49:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:23.568 16:49:09 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2537708 ']' 00:37:23.568 16:49:09 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.568 16:49:09 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.568 16:49:09 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.568 16:49:09 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.568 16:49:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:23.568 [2024-11-20 16:49:09.506529] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:37:23.568 [2024-11-20 16:49:09.506599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537708 ] 00:37:23.828 [2024-11-20 16:49:09.583919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.828 [2024-11-20 16:49:09.625036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.397 16:49:10 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.397 16:49:10 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:24.397 16:49:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:24.397 16:49:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.397 16:49:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:24.397 [2024-11-20 16:49:10.316258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:24.397 null0 00:37:24.397 [2024-11-20 16:49:10.348297] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:24.397 [2024-11-20 16:49:10.348653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.658 16:49:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:24.658 [2024-11-20 16:49:10.376355] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:24.658 request: 00:37:24.658 { 00:37:24.658 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.658 "secure_channel": false, 00:37:24.658 "listen_address": { 00:37:24.658 "trtype": "tcp", 00:37:24.658 "traddr": "127.0.0.1", 00:37:24.658 "trsvcid": "4420" 00:37:24.658 }, 00:37:24.658 "method": "nvmf_subsystem_add_listener", 00:37:24.658 "req_id": 1 00:37:24.658 } 00:37:24.658 Got JSON-RPC error response 00:37:24.658 response: 00:37:24.658 { 00:37:24.658 "code": -32602, 00:37:24.658 "message": "Invalid parameters" 00:37:24.658 } 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:24.658 16:49:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=2538011 00:37:24.658 16:49:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2538011 /var/tmp/bperf.sock 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2538011 ']' 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:24.658 16:49:10 
keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:24.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:24.658 16:49:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:24.658 16:49:10 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:24.658 [2024-11-20 16:49:10.431571] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 00:37:24.658 [2024-11-20 16:49:10.431618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538011 ] 00:37:24.658 [2024-11-20 16:49:10.518870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.658 [2024-11-20 16:49:10.554459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.600 16:49:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:25.600 16:49:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:25.600 16:49:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bk9rDVpQZ7 00:37:25.600 16:49:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bk9rDVpQZ7 00:37:25.600 16:49:11 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6BgbM4eErd 00:37:25.600 16:49:11 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6BgbM4eErd 00:37:25.600 16:49:11 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:25.600 16:49:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:25.600 16:49:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.600 16:49:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:25.600 16:49:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.860 16:49:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.bk9rDVpQZ7 == \/\t\m\p\/\t\m\p\.\b\k\9\r\D\V\p\Q\Z\7 ]] 00:37:25.860 16:49:11 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:25.860 16:49:11 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:25.860 16:49:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:25.861 16:49:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:25.861 16:49:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:26.121 16:49:11 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.6BgbM4eErd == \/\t\m\p\/\t\m\p\.\6\B\g\b\M\4\e\E\r\d ]] 00:37:26.121 16:49:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:26.121 16:49:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:26.121 16:49:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.121 16:49:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.121 16:49:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:26.121 16:49:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:26.121 16:49:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:26.121 16:49:12 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:26.121 16:49:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:26.121 16:49:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.121 16:49:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.121 16:49:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.121 16:49:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:26.382 16:49:12 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:26.382 16:49:12 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:26.382 16:49:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:26.642 [2024-11-20 16:49:12.382105] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:26.642 nvme0n1 00:37:26.642 16:49:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:26.642 16:49:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:26.642 16:49:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.642 16:49:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.642 16:49:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:26.642 16:49:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:26.902 16:49:12 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:26.902 16:49:12 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:26.902 16:49:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:26.902 16:49:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.902 16:49:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.902 16:49:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:26.902 16:49:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.902 16:49:12 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:26.902 16:49:12 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:27.163 Running I/O for 1 seconds... 00:37:28.108 15752.00 IOPS, 61.53 MiB/s 00:37:28.108 Latency(us) 00:37:28.108 [2024-11-20T15:49:14.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.108 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:28.108 nvme0n1 : 1.01 15785.92 61.66 0.00 0.00 8082.64 5843.63 15837.87 00:37:28.108 [2024-11-20T15:49:14.067Z] =================================================================================================================== 00:37:28.108 [2024-11-20T15:49:14.067Z] Total : 15785.92 61.66 0.00 0.00 8082.64 5843.63 15837.87 00:37:28.108 { 00:37:28.108 "results": [ 00:37:28.108 { 00:37:28.108 "job": "nvme0n1", 00:37:28.108 "core_mask": "0x2", 00:37:28.108 "workload": "randrw", 00:37:28.108 "percentage": 50, 00:37:28.108 "status": "finished", 00:37:28.108 "queue_depth": 128, 00:37:28.108 "io_size": 4096, 00:37:28.108 "runtime": 1.00596, 00:37:28.108 "iops": 15785.91594099169, 00:37:28.108 "mibps": 61.66373414449879, 
00:37:28.108 "io_failed": 0, 00:37:28.108 "io_timeout": 0, 00:37:28.108 "avg_latency_us": 8082.63920738875, 00:37:28.108 "min_latency_us": 5843.626666666667, 00:37:28.108 "max_latency_us": 15837.866666666667 00:37:28.108 } 00:37:28.108 ], 00:37:28.108 "core_count": 1 00:37:28.108 } 00:37:28.108 16:49:13 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:28.108 16:49:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:28.369 16:49:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.369 16:49:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:28.369 16:49:14 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:28.369 16:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.629 16:49:14 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:28.629 16:49:14 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:28.629 16:49:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:28.629 16:49:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:28.629 16:49:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:28.629 16:49:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:28.629 16:49:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:28.629 16:49:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:28.629 16:49:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:28.629 16:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:28.890 [2024-11-20 16:49:14.641521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:28.890 [2024-11-20 16:49:14.641784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00cb0 (107): Transport endpoint is not connected 00:37:28.890 [2024-11-20 16:49:14.642781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00cb0 (9): Bad file descriptor 00:37:28.890 [2024-11-20 16:49:14.643783] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:28.890 [2024-11-20 16:49:14.643789] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:28.890 [2024-11-20 16:49:14.643795] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:28.890 [2024-11-20 16:49:14.643802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:37:28.890 request: 00:37:28.890 { 00:37:28.890 "name": "nvme0", 00:37:28.890 "trtype": "tcp", 00:37:28.890 "traddr": "127.0.0.1", 00:37:28.890 "adrfam": "ipv4", 00:37:28.890 "trsvcid": "4420", 00:37:28.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.890 "prchk_reftag": false, 00:37:28.890 "prchk_guard": false, 00:37:28.890 "hdgst": false, 00:37:28.890 "ddgst": false, 00:37:28.890 "psk": "key1", 00:37:28.890 "allow_unrecognized_csi": false, 00:37:28.890 "method": "bdev_nvme_attach_controller", 00:37:28.891 "req_id": 1 00:37:28.891 } 00:37:28.891 Got JSON-RPC error response 00:37:28.891 response: 00:37:28.891 { 00:37:28.891 "code": -5, 00:37:28.891 "message": "Input/output error" 00:37:28.891 } 00:37:28.891 16:49:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:28.891 16:49:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:28.891 16:49:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:28.891 16:49:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:28.891 16:49:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.891 16:49:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:28.891 16:49:14 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:28.891 16:49:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:29.151 16:49:15 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:29.151 16:49:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:29.151 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:29.412 16:49:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:29.412 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:29.412 16:49:15 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:29.412 16:49:15 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:29.412 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.672 16:49:15 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:37:29.672 16:49:15 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.bk9rDVpQZ7 00:37:29.672 16:49:15 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bk9rDVpQZ7 00:37:29.672 16:49:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:29.672 16:49:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bk9rDVpQZ7 00:37:29.672 16:49:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:29.672 16:49:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:29.672 16:49:15 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:29.672 16:49:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:29.672 16:49:15 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bk9rDVpQZ7 00:37:29.672 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bk9rDVpQZ7 00:37:29.933 [2024-11-20 16:49:15.660500] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bk9rDVpQZ7': 0100660 00:37:29.933 [2024-11-20 16:49:15.660519] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:29.933 request: 00:37:29.933 { 00:37:29.933 "name": "key0", 00:37:29.933 "path": "/tmp/tmp.bk9rDVpQZ7", 00:37:29.933 "method": "keyring_file_add_key", 00:37:29.933 "req_id": 1 00:37:29.933 } 00:37:29.933 Got JSON-RPC error response 00:37:29.933 response: 00:37:29.933 { 00:37:29.933 "code": -1, 00:37:29.933 "message": "Operation not permitted" 00:37:29.933 } 00:37:29.933 16:49:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:29.933 16:49:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:29.933 16:49:15 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:29.933 16:49:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:29.933 16:49:15 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.bk9rDVpQZ7 00:37:29.933 16:49:15 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bk9rDVpQZ7 00:37:29.933 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bk9rDVpQZ7 00:37:29.933 16:49:15 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.bk9rDVpQZ7 00:37:29.933 16:49:15 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:29.933 16:49:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:29.933 16:49:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:29.933 16:49:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:29.933 16:49:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:29.933 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.194 16:49:16 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:30.194 16:49:16 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.194 16:49:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:30.194 16:49:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.194 16:49:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:30.194 16:49:16 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:30.194 16:49:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:30.194 16:49:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:30.194 16:49:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.194 16:49:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.455 [2024-11-20 16:49:16.177814] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bk9rDVpQZ7': No such file or directory 00:37:30.455 [2024-11-20 16:49:16.177827] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:30.455 [2024-11-20 16:49:16.177840] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:30.455 [2024-11-20 16:49:16.177846] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:30.455 [2024-11-20 16:49:16.177851] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:30.455 [2024-11-20 16:49:16.177856] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:30.455 request: 00:37:30.455 { 00:37:30.455 "name": "nvme0", 00:37:30.455 "trtype": "tcp", 00:37:30.455 "traddr": "127.0.0.1", 00:37:30.455 "adrfam": "ipv4", 00:37:30.455 "trsvcid": "4420", 00:37:30.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.455 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:37:30.455 "prchk_reftag": false, 00:37:30.455 "prchk_guard": false, 00:37:30.455 "hdgst": false, 00:37:30.455 "ddgst": false, 00:37:30.455 "psk": "key0", 00:37:30.455 "allow_unrecognized_csi": false, 00:37:30.455 "method": "bdev_nvme_attach_controller", 00:37:30.455 "req_id": 1 00:37:30.455 } 00:37:30.455 Got JSON-RPC error response 00:37:30.455 response: 00:37:30.455 { 00:37:30.455 "code": -19, 00:37:30.455 "message": "No such device" 00:37:30.455 } 00:37:30.455 16:49:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:30.455 16:49:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:30.455 16:49:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:30.455 16:49:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:30.455 16:49:16 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:30.455 16:49:16 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ahlv0qXu5f 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:30.455 16:49:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:30.455 16:49:16 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:37:30.455 16:49:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:30.455 16:49:16 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:30.455 16:49:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:30.455 16:49:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ahlv0qXu5f 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ahlv0qXu5f 00:37:30.455 16:49:16 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Ahlv0qXu5f 00:37:30.455 16:49:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ahlv0qXu5f 00:37:30.455 16:49:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ahlv0qXu5f 00:37:30.714 16:49:16 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.714 16:49:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:30.975 nvme0n1 00:37:30.975 16:49:16 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:30.975 16:49:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:30.975 16:49:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:30.975 16:49:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.975 16:49:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:30.975 16:49:16 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.237 16:49:16 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:31.237 16:49:16 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:31.237 16:49:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:31.237 16:49:17 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:31.237 16:49:17 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:31.237 16:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.237 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.237 16:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.498 16:49:17 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:31.498 16:49:17 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:31.498 16:49:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:31.498 16:49:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:31.498 16:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:31.498 16:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:31.498 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.758 16:49:17 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:31.758 16:49:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:31.758 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:37:31.758 16:49:17 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:31.758 16:49:17 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:31.758 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.019 16:49:17 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:32.019 16:49:17 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ahlv0qXu5f 00:37:32.019 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ahlv0qXu5f 00:37:32.019 16:49:17 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6BgbM4eErd 00:37:32.019 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6BgbM4eErd 00:37:32.280 16:49:18 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:32.280 16:49:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:32.540 nvme0n1 00:37:32.540 16:49:18 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:32.540 16:49:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:32.800 16:49:18 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:32.800 "subsystems": [ 00:37:32.800 { 00:37:32.800 "subsystem": 
"keyring", 00:37:32.800 "config": [ 00:37:32.800 { 00:37:32.800 "method": "keyring_file_add_key", 00:37:32.800 "params": { 00:37:32.800 "name": "key0", 00:37:32.800 "path": "/tmp/tmp.Ahlv0qXu5f" 00:37:32.800 } 00:37:32.800 }, 00:37:32.800 { 00:37:32.800 "method": "keyring_file_add_key", 00:37:32.800 "params": { 00:37:32.800 "name": "key1", 00:37:32.800 "path": "/tmp/tmp.6BgbM4eErd" 00:37:32.800 } 00:37:32.800 } 00:37:32.800 ] 00:37:32.800 }, 00:37:32.800 { 00:37:32.800 "subsystem": "iobuf", 00:37:32.800 "config": [ 00:37:32.800 { 00:37:32.800 "method": "iobuf_set_options", 00:37:32.800 "params": { 00:37:32.800 "small_pool_count": 8192, 00:37:32.800 "large_pool_count": 1024, 00:37:32.800 "small_bufsize": 8192, 00:37:32.800 "large_bufsize": 135168, 00:37:32.800 "enable_numa": false 00:37:32.800 } 00:37:32.800 } 00:37:32.800 ] 00:37:32.800 }, 00:37:32.800 { 00:37:32.801 "subsystem": "sock", 00:37:32.801 "config": [ 00:37:32.801 { 00:37:32.801 "method": "sock_set_default_impl", 00:37:32.801 "params": { 00:37:32.801 "impl_name": "posix" 00:37:32.801 } 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "method": "sock_impl_set_options", 00:37:32.801 "params": { 00:37:32.801 "impl_name": "ssl", 00:37:32.801 "recv_buf_size": 4096, 00:37:32.801 "send_buf_size": 4096, 00:37:32.801 "enable_recv_pipe": true, 00:37:32.801 "enable_quickack": false, 00:37:32.801 "enable_placement_id": 0, 00:37:32.801 "enable_zerocopy_send_server": true, 00:37:32.801 "enable_zerocopy_send_client": false, 00:37:32.801 "zerocopy_threshold": 0, 00:37:32.801 "tls_version": 0, 00:37:32.801 "enable_ktls": false 00:37:32.801 } 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "method": "sock_impl_set_options", 00:37:32.801 "params": { 00:37:32.801 "impl_name": "posix", 00:37:32.801 "recv_buf_size": 2097152, 00:37:32.801 "send_buf_size": 2097152, 00:37:32.801 "enable_recv_pipe": true, 00:37:32.801 "enable_quickack": false, 00:37:32.801 "enable_placement_id": 0, 00:37:32.801 "enable_zerocopy_send_server": true, 
00:37:32.801 "enable_zerocopy_send_client": false, 00:37:32.801 "zerocopy_threshold": 0, 00:37:32.801 "tls_version": 0, 00:37:32.801 "enable_ktls": false 00:37:32.801 } 00:37:32.801 } 00:37:32.801 ] 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "subsystem": "vmd", 00:37:32.801 "config": [] 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "subsystem": "accel", 00:37:32.801 "config": [ 00:37:32.801 { 00:37:32.801 "method": "accel_set_options", 00:37:32.801 "params": { 00:37:32.801 "small_cache_size": 128, 00:37:32.801 "large_cache_size": 16, 00:37:32.801 "task_count": 2048, 00:37:32.801 "sequence_count": 2048, 00:37:32.801 "buf_count": 2048 00:37:32.801 } 00:37:32.801 } 00:37:32.801 ] 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "subsystem": "bdev", 00:37:32.801 "config": [ 00:37:32.801 { 00:37:32.801 "method": "bdev_set_options", 00:37:32.801 "params": { 00:37:32.801 "bdev_io_pool_size": 65535, 00:37:32.801 "bdev_io_cache_size": 256, 00:37:32.801 "bdev_auto_examine": true, 00:37:32.801 "iobuf_small_cache_size": 128, 00:37:32.801 "iobuf_large_cache_size": 16 00:37:32.801 } 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "method": "bdev_raid_set_options", 00:37:32.801 "params": { 00:37:32.801 "process_window_size_kb": 1024, 00:37:32.801 "process_max_bandwidth_mb_sec": 0 00:37:32.801 } 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "method": "bdev_iscsi_set_options", 00:37:32.801 "params": { 00:37:32.801 "timeout_sec": 30 00:37:32.801 } 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "method": "bdev_nvme_set_options", 00:37:32.801 "params": { 00:37:32.801 "action_on_timeout": "none", 00:37:32.801 "timeout_us": 0, 00:37:32.801 "timeout_admin_us": 0, 00:37:32.801 "keep_alive_timeout_ms": 10000, 00:37:32.801 "arbitration_burst": 0, 00:37:32.801 "low_priority_weight": 0, 00:37:32.801 "medium_priority_weight": 0, 00:37:32.801 "high_priority_weight": 0, 00:37:32.801 "nvme_adminq_poll_period_us": 10000, 00:37:32.801 "nvme_ioq_poll_period_us": 0, 00:37:32.801 "io_queue_requests": 512, 
00:37:32.801 "delay_cmd_submit": true, 00:37:32.801 "transport_retry_count": 4, 00:37:32.801 "bdev_retry_count": 3, 00:37:32.801 "transport_ack_timeout": 0, 00:37:32.801 "ctrlr_loss_timeout_sec": 0, 00:37:32.801 "reconnect_delay_sec": 0, 00:37:32.801 "fast_io_fail_timeout_sec": 0, 00:37:32.801 "disable_auto_failback": false, 00:37:32.801 "generate_uuids": false, 00:37:32.801 "transport_tos": 0, 00:37:32.801 "nvme_error_stat": false, 00:37:32.801 "rdma_srq_size": 0, 00:37:32.801 "io_path_stat": false, 00:37:32.801 "allow_accel_sequence": false, 00:37:32.801 "rdma_max_cq_size": 0, 00:37:32.801 "rdma_cm_event_timeout_ms": 0, 00:37:32.801 "dhchap_digests": [ 00:37:32.801 "sha256", 00:37:32.801 "sha384", 00:37:32.801 "sha512" 00:37:32.801 ], 00:37:32.801 "dhchap_dhgroups": [ 00:37:32.801 "null", 00:37:32.801 "ffdhe2048", 00:37:32.801 "ffdhe3072", 00:37:32.801 "ffdhe4096", 00:37:32.801 "ffdhe6144", 00:37:32.801 "ffdhe8192" 00:37:32.801 ] 00:37:32.801 } 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "method": "bdev_nvme_attach_controller", 00:37:32.801 "params": { 00:37:32.801 "name": "nvme0", 00:37:32.801 "trtype": "TCP", 00:37:32.801 "adrfam": "IPv4", 00:37:32.801 "traddr": "127.0.0.1", 00:37:32.801 "trsvcid": "4420", 00:37:32.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.801 "prchk_reftag": false, 00:37:32.801 "prchk_guard": false, 00:37:32.801 "ctrlr_loss_timeout_sec": 0, 00:37:32.801 "reconnect_delay_sec": 0, 00:37:32.801 "fast_io_fail_timeout_sec": 0, 00:37:32.801 "psk": "key0", 00:37:32.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.801 "hdgst": false, 00:37:32.801 "ddgst": false, 00:37:32.801 "multipath": "multipath" 00:37:32.801 } 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "method": "bdev_nvme_set_hotplug", 00:37:32.801 "params": { 00:37:32.801 "period_us": 100000, 00:37:32.801 "enable": false 00:37:32.801 } 00:37:32.801 }, 00:37:32.801 { 00:37:32.801 "method": "bdev_wait_for_examine" 00:37:32.801 } 00:37:32.801 ] 00:37:32.801 }, 00:37:32.801 { 
00:37:32.801 "subsystem": "nbd", 00:37:32.801 "config": [] 00:37:32.801 } 00:37:32.801 ] 00:37:32.802 }' 00:37:32.802 16:49:18 keyring_file -- keyring/file.sh@115 -- # killprocess 2538011 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2538011 ']' 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2538011 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2538011 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2538011' 00:37:32.802 killing process with pid 2538011 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@973 -- # kill 2538011 00:37:32.802 Received shutdown signal, test time was about 1.000000 seconds 00:37:32.802 00:37:32.802 Latency(us) 00:37:32.802 [2024-11-20T15:49:18.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.802 [2024-11-20T15:49:18.761Z] =================================================================================================================== 00:37:32.802 [2024-11-20T15:49:18.761Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:32.802 16:49:18 keyring_file -- common/autotest_common.sh@978 -- # wait 2538011 00:37:33.196 16:49:18 keyring_file -- keyring/file.sh@118 -- # bperfpid=2539645 00:37:33.196 16:49:18 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2539645 /var/tmp/bperf.sock 00:37:33.196 16:49:18 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2539645 ']' 00:37:33.196 16:49:18 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:33.196 
"subsystems": [ 00:37:33.196 { 00:37:33.196 "subsystem": "keyring", 00:37:33.196 "config": [ 00:37:33.196 { 00:37:33.196 "method": "keyring_file_add_key", 00:37:33.196 "params": { 00:37:33.196 "name": "key0", 00:37:33.196 "path": "/tmp/tmp.Ahlv0qXu5f" 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "keyring_file_add_key", 00:37:33.196 "params": { 00:37:33.196 "name": "key1", 00:37:33.196 "path": "/tmp/tmp.6BgbM4eErd" 00:37:33.196 } 00:37:33.196 } 00:37:33.196 ] 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "subsystem": "iobuf", 00:37:33.196 "config": [ 00:37:33.196 { 00:37:33.196 "method": "iobuf_set_options", 00:37:33.196 "params": { 00:37:33.196 "small_pool_count": 8192, 00:37:33.196 "large_pool_count": 1024, 00:37:33.196 "small_bufsize": 8192, 00:37:33.196 "large_bufsize": 135168, 00:37:33.196 "enable_numa": false 00:37:33.196 } 00:37:33.196 } 00:37:33.196 ] 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "subsystem": "sock", 00:37:33.196 "config": [ 00:37:33.196 { 00:37:33.196 "method": "sock_set_default_impl", 00:37:33.196 "params": { 00:37:33.196 "impl_name": "posix" 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "sock_impl_set_options", 00:37:33.196 "params": { 00:37:33.196 "impl_name": "ssl", 00:37:33.196 "recv_buf_size": 4096, 00:37:33.196 "send_buf_size": 4096, 00:37:33.196 "enable_recv_pipe": true, 00:37:33.196 "enable_quickack": false, 00:37:33.196 "enable_placement_id": 0, 00:37:33.196 "enable_zerocopy_send_server": true, 00:37:33.196 "enable_zerocopy_send_client": false, 00:37:33.196 "zerocopy_threshold": 0, 00:37:33.196 "tls_version": 0, 00:37:33.196 "enable_ktls": false 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "sock_impl_set_options", 00:37:33.196 "params": { 00:37:33.196 "impl_name": "posix", 00:37:33.196 "recv_buf_size": 2097152, 00:37:33.196 "send_buf_size": 2097152, 00:37:33.196 "enable_recv_pipe": true, 00:37:33.196 "enable_quickack": false, 00:37:33.196 "enable_placement_id": 0, 
00:37:33.196 "enable_zerocopy_send_server": true, 00:37:33.196 "enable_zerocopy_send_client": false, 00:37:33.196 "zerocopy_threshold": 0, 00:37:33.196 "tls_version": 0, 00:37:33.196 "enable_ktls": false 00:37:33.196 } 00:37:33.196 } 00:37:33.196 ] 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "subsystem": "vmd", 00:37:33.196 "config": [] 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "subsystem": "accel", 00:37:33.196 "config": [ 00:37:33.196 { 00:37:33.196 "method": "accel_set_options", 00:37:33.196 "params": { 00:37:33.196 "small_cache_size": 128, 00:37:33.196 "large_cache_size": 16, 00:37:33.196 "task_count": 2048, 00:37:33.196 "sequence_count": 2048, 00:37:33.196 "buf_count": 2048 00:37:33.196 } 00:37:33.196 } 00:37:33.196 ] 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "subsystem": "bdev", 00:37:33.196 "config": [ 00:37:33.196 { 00:37:33.196 "method": "bdev_set_options", 00:37:33.196 "params": { 00:37:33.196 "bdev_io_pool_size": 65535, 00:37:33.196 "bdev_io_cache_size": 256, 00:37:33.196 "bdev_auto_examine": true, 00:37:33.196 "iobuf_small_cache_size": 128, 00:37:33.196 "iobuf_large_cache_size": 16 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "bdev_raid_set_options", 00:37:33.196 "params": { 00:37:33.196 "process_window_size_kb": 1024, 00:37:33.196 "process_max_bandwidth_mb_sec": 0 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "bdev_iscsi_set_options", 00:37:33.196 "params": { 00:37:33.196 "timeout_sec": 30 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "bdev_nvme_set_options", 00:37:33.196 "params": { 00:37:33.196 "action_on_timeout": "none", 00:37:33.196 "timeout_us": 0, 00:37:33.196 "timeout_admin_us": 0, 00:37:33.196 "keep_alive_timeout_ms": 10000, 00:37:33.196 "arbitration_burst": 0, 00:37:33.196 "low_priority_weight": 0, 00:37:33.196 "medium_priority_weight": 0, 00:37:33.196 "high_priority_weight": 0, 00:37:33.196 "nvme_adminq_poll_period_us": 10000, 00:37:33.196 "nvme_ioq_poll_period_us": 0, 
00:37:33.196 "io_queue_requests": 512, 00:37:33.196 "delay_cmd_submit": true, 00:37:33.196 "transport_retry_count": 4, 00:37:33.196 "bdev_retry_count": 3, 00:37:33.196 "transport_ack_timeout": 0, 00:37:33.196 "ctrlr_loss_timeout_sec": 0, 00:37:33.196 "reconnect_delay_sec": 0, 00:37:33.196 "fast_io_fail_timeout_sec": 0, 00:37:33.196 "disable_auto_failback": false, 00:37:33.196 "generate_uuids": false, 00:37:33.196 "transport_tos": 0, 00:37:33.196 "nvme_error_stat": false, 00:37:33.196 "rdma_srq_size": 0, 00:37:33.196 "io_path_stat": false, 00:37:33.196 "allow_accel_sequence": false, 00:37:33.196 "rdma_max_cq_size": 0, 00:37:33.196 "rdma_cm_event_timeout_ms": 0, 00:37:33.196 "dhchap_digests": [ 00:37:33.196 "sha256", 00:37:33.196 "sha384", 00:37:33.196 "sha512" 00:37:33.196 ], 00:37:33.196 "dhchap_dhgroups": [ 00:37:33.196 "null", 00:37:33.196 "ffdhe2048", 00:37:33.196 "ffdhe3072", 00:37:33.196 "ffdhe4096", 00:37:33.196 "ffdhe6144", 00:37:33.196 "ffdhe8192" 00:37:33.196 ] 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "bdev_nvme_attach_controller", 00:37:33.196 "params": { 00:37:33.196 "name": "nvme0", 00:37:33.196 "trtype": "TCP", 00:37:33.196 "adrfam": "IPv4", 00:37:33.196 "traddr": "127.0.0.1", 00:37:33.196 "trsvcid": "4420", 00:37:33.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:33.196 "prchk_reftag": false, 00:37:33.196 "prchk_guard": false, 00:37:33.196 "ctrlr_loss_timeout_sec": 0, 00:37:33.196 "reconnect_delay_sec": 0, 00:37:33.196 "fast_io_fail_timeout_sec": 0, 00:37:33.196 "psk": "key0", 00:37:33.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:33.196 "hdgst": false, 00:37:33.196 "ddgst": false, 00:37:33.196 "multipath": "multipath" 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "bdev_nvme_set_hotplug", 00:37:33.196 "params": { 00:37:33.196 "period_us": 100000, 00:37:33.196 "enable": false 00:37:33.196 } 00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "method": "bdev_wait_for_examine" 00:37:33.196 } 00:37:33.196 ] 
00:37:33.196 }, 00:37:33.196 { 00:37:33.196 "subsystem": "nbd", 00:37:33.196 "config": [] 00:37:33.197 } 00:37:33.197 ] 00:37:33.197 }' 00:37:33.197 16:49:18 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:33.197 16:49:18 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:33.197 16:49:18 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.197 16:49:18 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:33.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:33.197 16:49:18 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.197 16:49:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:33.197 [2024-11-20 16:49:18.837273] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:37:33.197 [2024-11-20 16:49:18.837330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539645 ] 00:37:33.197 [2024-11-20 16:49:18.919620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.197 [2024-11-20 16:49:18.949279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.197 [2024-11-20 16:49:19.094030] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:33.788 16:49:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:33.788 16:49:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:33.788 16:49:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:33.788 16:49:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:33.788 16:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.049 16:49:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:34.049 16:49:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.049 16:49:19 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:34.049 16:49:19 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:34.049 16:49:19 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.049 16:49:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.309 16:49:20 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:34.309 16:49:20 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:34.309 16:49:20 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:34.309 16:49:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:34.570 16:49:20 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:34.570 16:49:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:34.570 16:49:20 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ahlv0qXu5f /tmp/tmp.6BgbM4eErd 00:37:34.570 16:49:20 keyring_file -- keyring/file.sh@20 -- # killprocess 2539645 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2539645 ']' 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2539645 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2539645 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2539645' 00:37:34.570 killing process with pid 2539645 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@973 -- # kill 2539645 00:37:34.570 Received shutdown signal, test time was about 1.000000 seconds 00:37:34.570 00:37:34.570 Latency(us) 00:37:34.570 [2024-11-20T15:49:20.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.570 [2024-11-20T15:49:20.529Z] =================================================================================================================== 00:37:34.570 [2024-11-20T15:49:20.529Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@978 -- # wait 2539645 00:37:34.570 16:49:20 keyring_file -- keyring/file.sh@21 -- # killprocess 2537708 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2537708 ']' 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2537708 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:34.570 16:49:20 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2537708 00:37:34.830 16:49:20 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:34.830 16:49:20 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:34.830 16:49:20 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2537708' 00:37:34.830 killing process with pid 2537708 00:37:34.830 16:49:20 keyring_file -- common/autotest_common.sh@973 -- # kill 2537708 00:37:34.830 16:49:20 keyring_file -- common/autotest_common.sh@978 -- # wait 2537708 00:37:34.830 00:37:34.830 real 0m11.648s 00:37:34.830 user 0m28.176s 00:37:34.830 sys 0m2.501s 00:37:34.830 16:49:20 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:37:34.830 16:49:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:34.830 ************************************ 00:37:34.830 END TEST keyring_file 00:37:34.830 ************************************ 00:37:35.091 16:49:20 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:37:35.091 16:49:20 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:35.091 16:49:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:35.091 16:49:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:35.091 16:49:20 -- common/autotest_common.sh@10 -- # set +x 00:37:35.091 ************************************ 00:37:35.091 START TEST keyring_linux 00:37:35.091 ************************************ 00:37:35.091 16:49:20 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:35.091 Joined session keyring: 826727195 00:37:35.091 * Looking for test storage... 
00:37:35.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:35.091 16:49:20 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:35.091 16:49:20 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:37:35.091 16:49:20 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:35.091 16:49:21 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:35.091 16:49:21 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:35.091 16:49:21 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:35.091 16:49:21 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.091 --rc genhtml_branch_coverage=1 00:37:35.091 --rc genhtml_function_coverage=1 00:37:35.091 --rc genhtml_legend=1 00:37:35.091 --rc geninfo_all_blocks=1 00:37:35.091 --rc geninfo_unexecuted_blocks=1 00:37:35.091 00:37:35.091 ' 00:37:35.091 16:49:21 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.091 --rc genhtml_branch_coverage=1 00:37:35.091 --rc genhtml_function_coverage=1 00:37:35.091 --rc genhtml_legend=1 00:37:35.091 --rc geninfo_all_blocks=1 00:37:35.091 --rc geninfo_unexecuted_blocks=1 00:37:35.091 00:37:35.091 ' 
00:37:35.091 16:49:21 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.091 --rc genhtml_branch_coverage=1 00:37:35.091 --rc genhtml_function_coverage=1 00:37:35.091 --rc genhtml_legend=1 00:37:35.091 --rc geninfo_all_blocks=1 00:37:35.091 --rc geninfo_unexecuted_blocks=1 00:37:35.091 00:37:35.091 ' 00:37:35.091 16:49:21 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:35.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:35.091 --rc genhtml_branch_coverage=1 00:37:35.091 --rc genhtml_function_coverage=1 00:37:35.091 --rc genhtml_legend=1 00:37:35.091 --rc geninfo_all_blocks=1 00:37:35.091 --rc geninfo_unexecuted_blocks=1 00:37:35.091 00:37:35.091 ' 00:37:35.352 16:49:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:35.352 16:49:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:35.352 16:49:21 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:35.352 16:49:21 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:35.352 16:49:21 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:35.352 16:49:21 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:35.352 16:49:21 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:35.352 16:49:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.352 16:49:21 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.353 16:49:21 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.353 16:49:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:35.353 16:49:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:35.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:35.353 /tmp/:spdk-test:key0 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:35.353 16:49:21 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:35.353 16:49:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:35.353 /tmp/:spdk-test:key1 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2540263 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2540263 00:37:35.353 16:49:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:35.353 16:49:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2540263 ']' 00:37:35.353 16:49:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.353 16:49:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.353 16:49:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.353 16:49:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.353 16:49:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:35.353 [2024-11-20 16:49:21.226780] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:37:35.353 [2024-11-20 16:49:21.226859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540263 ] 00:37:35.353 [2024-11-20 16:49:21.301837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:35.613 [2024-11-20 16:49:21.343855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:36.182 16:49:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:36.182 [2024-11-20 16:49:22.012595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.182 null0 00:37:36.182 [2024-11-20 16:49:22.044636] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:36.182 [2024-11-20 16:49:22.045031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.182 16:49:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:36.182 768358639 00:37:36.182 16:49:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:36.182 82857976 00:37:36.182 16:49:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2540286 00:37:36.182 16:49:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2540286 /var/tmp/bperf.sock 00:37:36.182 16:49:22 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2540286 ']' 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:36.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:36.182 16:49:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:36.182 [2024-11-20 16:49:22.122555] Starting SPDK v25.01-pre git sha1 7bc1aace1 / DPDK 24.03.0 initialization... 
00:37:36.182 [2024-11-20 16:49:22.122605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540286 ] 00:37:36.443 [2024-11-20 16:49:22.204879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.443 [2024-11-20 16:49:22.234827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:37.014 16:49:22 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:37.014 16:49:22 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:37.014 16:49:22 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:37.014 16:49:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:37.274 16:49:23 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:37.274 16:49:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:37.534 16:49:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:37.535 16:49:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:37.535 [2024-11-20 16:49:23.460482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:37.795 nvme0n1 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:37.795 16:49:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:37.795 16:49:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:37.795 16:49:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.795 16:49:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:37.795 16:49:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.055 16:49:23 keyring_linux -- keyring/linux.sh@25 -- # sn=768358639 00:37:38.055 16:49:23 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:38.055 16:49:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:38.055 16:49:23 keyring_linux -- keyring/linux.sh@26 -- # [[ 768358639 == \7\6\8\3\5\8\6\3\9 ]] 00:37:38.055 16:49:23 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 768358639 00:37:38.055 16:49:23 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:38.055 16:49:23 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:38.055 Running I/O for 1 seconds... 00:37:39.438 17494.00 IOPS, 68.34 MiB/s 00:37:39.438 Latency(us) 00:37:39.438 [2024-11-20T15:49:25.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:39.438 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:39.438 nvme0n1 : 1.01 17494.26 68.34 0.00 0.00 7286.99 2239.15 8738.13 00:37:39.438 [2024-11-20T15:49:25.397Z] =================================================================================================================== 00:37:39.438 [2024-11-20T15:49:25.397Z] Total : 17494.26 68.34 0.00 0.00 7286.99 2239.15 8738.13 00:37:39.438 { 00:37:39.438 "results": [ 00:37:39.438 { 00:37:39.438 "job": "nvme0n1", 00:37:39.438 "core_mask": "0x2", 00:37:39.438 "workload": "randread", 00:37:39.438 "status": "finished", 00:37:39.438 "queue_depth": 128, 00:37:39.438 "io_size": 4096, 00:37:39.438 "runtime": 1.007302, 00:37:39.438 "iops": 17494.25693585439, 00:37:39.438 "mibps": 68.33694115568122, 00:37:39.438 "io_failed": 0, 00:37:39.438 "io_timeout": 0, 00:37:39.438 "avg_latency_us": 7286.989009193055, 00:37:39.438 "min_latency_us": 2239.1466666666665, 00:37:39.438 "max_latency_us": 8738.133333333333 00:37:39.438 } 00:37:39.438 ], 00:37:39.438 "core_count": 1 00:37:39.438 } 00:37:39.438 16:49:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:39.438 16:49:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:39.438 16:49:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:39.438 16:49:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:39.438 16:49:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:39.439 16:49:25 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:39.439 16:49:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:39.439 16:49:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.439 16:49:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:39.439 16:49:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:39.439 16:49:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:39.439 16:49:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.439 16:49:25 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:39.439 16:49:25 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.439 16:49:25 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:39.439 16:49:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:39.439 16:49:25 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:39.439 16:49:25 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:39.439 16:49:25 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.439 16:49:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:39.699 [2024-11-20 16:49:25.545089] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:39.699 [2024-11-20 16:49:25.545431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1806a60 (107): Transport endpoint is not connected 00:37:39.699 [2024-11-20 16:49:25.546427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1806a60 (9): Bad file descriptor 00:37:39.699 [2024-11-20 16:49:25.547429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:39.699 [2024-11-20 16:49:25.547436] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:39.699 [2024-11-20 16:49:25.547442] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:39.699 [2024-11-20 16:49:25.547448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:37:39.699 request: 00:37:39.699 { 00:37:39.699 "name": "nvme0", 00:37:39.699 "trtype": "tcp", 00:37:39.699 "traddr": "127.0.0.1", 00:37:39.699 "adrfam": "ipv4", 00:37:39.699 "trsvcid": "4420", 00:37:39.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.699 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.699 "prchk_reftag": false, 00:37:39.699 "prchk_guard": false, 00:37:39.699 "hdgst": false, 00:37:39.699 "ddgst": false, 00:37:39.699 "psk": ":spdk-test:key1", 00:37:39.699 "allow_unrecognized_csi": false, 00:37:39.699 "method": "bdev_nvme_attach_controller", 00:37:39.699 "req_id": 1 00:37:39.699 } 00:37:39.699 Got JSON-RPC error response 00:37:39.699 response: 00:37:39.699 { 00:37:39.699 "code": -5, 00:37:39.699 "message": "Input/output error" 00:37:39.699 } 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@33 -- # sn=768358639 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 768358639 00:37:39.699 1 links removed 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:39.699 
16:49:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@33 -- # sn=82857976 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 82857976 00:37:39.699 1 links removed 00:37:39.699 16:49:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2540286 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2540286 ']' 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2540286 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2540286 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2540286' 00:37:39.699 killing process with pid 2540286 00:37:39.699 16:49:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 2540286 00:37:39.699 Received shutdown signal, test time was about 1.000000 seconds 00:37:39.699 00:37:39.699 Latency(us) 00:37:39.699 [2024-11-20T15:49:25.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:39.699 [2024-11-20T15:49:25.658Z] =================================================================================================================== 00:37:39.699 [2024-11-20T15:49:25.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:39.700 16:49:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 2540286 
00:37:39.959 16:49:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2540263 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2540263 ']' 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2540263 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2540263 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2540263' 00:37:39.959 killing process with pid 2540263 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 2540263 00:37:39.959 16:49:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 2540263 00:37:40.219 00:37:40.219 real 0m5.179s 00:37:40.219 user 0m9.697s 00:37:40.219 sys 0m1.397s 00:37:40.219 16:49:26 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:40.219 16:49:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:40.219 ************************************ 00:37:40.219 END TEST keyring_linux 00:37:40.219 ************************************ 00:37:40.219 16:49:26 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:40.219 16:49:26 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:40.219 16:49:26 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:40.219 16:49:26 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:40.219 16:49:26 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:40.219 16:49:26 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:40.219 16:49:26 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:40.219 16:49:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:40.219 16:49:26 -- common/autotest_common.sh@10 -- # set +x 00:37:40.219 16:49:26 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:40.220 16:49:26 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:40.220 16:49:26 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:40.220 16:49:26 -- common/autotest_common.sh@10 -- # set +x 00:37:48.351 INFO: APP EXITING 00:37:48.351 INFO: killing all VMs 00:37:48.351 INFO: killing vhost app 00:37:48.351 INFO: EXIT DONE 00:37:50.894 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:50.894 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:50.894 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:50.894 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:37:54.194 Cleaning 00:37:54.194 Removing: /var/run/dpdk/spdk0/config 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:54.194 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:54.194 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:54.194 Removing: /var/run/dpdk/spdk1/config 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:54.194 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:54.194 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:54.194 Removing: /var/run/dpdk/spdk2/config 00:37:54.194 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:54.194 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:54.194 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:54.194 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:54.194 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:54.194 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:54.194 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:54.194 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:54.194 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:54.194 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:54.194 Removing: /var/run/dpdk/spdk3/config 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:54.194 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:54.194 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:54.194 Removing: /var/run/dpdk/spdk4/config 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:54.194 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:54.194 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:37:54.194 Removing: /dev/shm/bdev_svc_trace.1 00:37:54.194 Removing: /dev/shm/nvmf_trace.0 00:37:54.194 Removing: /dev/shm/spdk_tgt_trace.pid1962736 00:37:54.194 Removing: /var/run/dpdk/spdk0 00:37:54.194 Removing: /var/run/dpdk/spdk1 00:37:54.194 Removing: /var/run/dpdk/spdk2 00:37:54.194 Removing: /var/run/dpdk/spdk3 00:37:54.194 Removing: /var/run/dpdk/spdk4 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1961245 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1962736 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1963281 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1964467 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1964647 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1965959 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1966040 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1966502 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1967633 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1968106 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1968496 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1968898 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1969314 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1969709 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1970063 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1970267 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1970535 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1971583 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1975110 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1975309 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1975637 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1975876 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1976264 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1976582 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1976961 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1977232 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1977361 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1977673 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1977824 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1978048 00:37:54.194 Removing: 
/var/run/dpdk/spdk_pid1978498 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1978849 00:37:54.194 Removing: /var/run/dpdk/spdk_pid1979250 00:37:54.195 Removing: /var/run/dpdk/spdk_pid1983800 00:37:54.195 Removing: /var/run/dpdk/spdk_pid1989221 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2001374 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2002053 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2007263 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2007619 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2013224 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2020344 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2023703 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2036002 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2047080 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2049096 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2050116 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2071771 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2076559 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2132934 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2139072 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2146251 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2154195 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2154197 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2155199 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2156203 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2157219 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2158066 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2158206 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2158449 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2158550 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2158564 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2159566 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2160569 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2161580 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2162254 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2162372 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2162615 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2164046 
00:37:54.195 Removing: /var/run/dpdk/spdk_pid2165477 00:37:54.195 Removing: /var/run/dpdk/spdk_pid2176044 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2211921 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2217352 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2219353 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2221371 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2221657 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2221726 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2221869 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2222454 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2224599 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2225543 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2226090 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2228632 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2229331 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2230056 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2235133 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2241767 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2241769 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2241771 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2246444 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2256686 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2261977 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2269207 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2270708 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2272473 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2274063 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2279665 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2284966 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2290031 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2299189 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2299197 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2304340 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2304609 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2304938 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2305304 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2305489 00:37:54.455 Removing: 
/var/run/dpdk/spdk_pid2311031 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2311834 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2317628 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2320793 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2327394 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2334110 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2344240 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2352930 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2352969 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2376960 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2377818 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2378505 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2379191 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2380252 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2380943 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2381625 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2382430 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2387706 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2387918 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2395120 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2395486 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2401943 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2407052 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2419035 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2419709 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2424788 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2425132 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2430202 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2437194 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2440143 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2452266 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2463010 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2465013 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2466020 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2486285 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2491038 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2494218 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2501372 
00:37:54.455 Removing: /var/run/dpdk/spdk_pid2501441 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2507404 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2509817 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2512011 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2513391 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2515715 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2517236 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2527804 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2528445 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2528965 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2531786 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2532439 00:37:54.455 Removing: /var/run/dpdk/spdk_pid2533107 00:37:54.715 Removing: /var/run/dpdk/spdk_pid2537708 00:37:54.715 Removing: /var/run/dpdk/spdk_pid2538011 00:37:54.715 Removing: /var/run/dpdk/spdk_pid2539645 00:37:54.715 Removing: /var/run/dpdk/spdk_pid2540263 00:37:54.715 Removing: /var/run/dpdk/spdk_pid2540286 00:37:54.715 Clean 00:37:54.715 16:49:40 -- common/autotest_common.sh@1453 -- # return 0 00:37:54.715 16:49:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:54.715 16:49:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.715 16:49:40 -- common/autotest_common.sh@10 -- # set +x 00:37:54.715 16:49:40 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:54.715 16:49:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:54.715 16:49:40 -- common/autotest_common.sh@10 -- # set +x 00:37:54.715 16:49:40 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:54.715 16:49:40 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:54.715 16:49:40 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:54.715 16:49:40 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:54.715 16:49:40 -- spdk/autotest.sh@398 -- # hostname 00:37:54.715 
16:49:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:54.977 geninfo: WARNING: invalid characters removed from testname! 00:38:21.548 16:50:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:22.928 16:50:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:24.836 16:50:10 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:26.217 16:50:12 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:28.125 16:50:13 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:30.037 16:50:15 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:31.420 16:50:17 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:31.420 16:50:17 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:31.420 16:50:17 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:38:31.420 16:50:17 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:31.420 16:50:17 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:31.420 16:50:17 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:31.420 + [[ -n 1876486 ]] 00:38:31.420 + sudo kill 
1876486 00:38:31.430 [Pipeline] } 00:38:31.446 [Pipeline] // stage 00:38:31.449 [Pipeline] } 00:38:31.463 [Pipeline] // timeout 00:38:31.466 [Pipeline] } 00:38:31.479 [Pipeline] // catchError 00:38:31.485 [Pipeline] } 00:38:31.502 [Pipeline] // wrap 00:38:31.507 [Pipeline] } 00:38:31.521 [Pipeline] // catchError 00:38:31.530 [Pipeline] stage 00:38:31.532 [Pipeline] { (Epilogue) 00:38:31.546 [Pipeline] catchError 00:38:31.548 [Pipeline] { 00:38:31.560 [Pipeline] echo 00:38:31.562 Cleanup processes 00:38:31.567 [Pipeline] sh 00:38:31.854 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:31.854 2552929 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:31.869 [Pipeline] sh 00:38:32.154 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:32.154 ++ grep -v 'sudo pgrep' 00:38:32.154 ++ awk '{print $1}' 00:38:32.154 + sudo kill -9 00:38:32.154 + true 00:38:32.166 [Pipeline] sh 00:38:32.454 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:44.693 [Pipeline] sh 00:38:44.984 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:44.984 Artifacts sizes are good 00:38:44.998 [Pipeline] archiveArtifacts 00:38:45.005 Archiving artifacts 00:38:45.156 [Pipeline] sh 00:38:45.442 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:45.458 [Pipeline] cleanWs 00:38:45.468 [WS-CLEANUP] Deleting project workspace... 00:38:45.468 [WS-CLEANUP] Deferred wipeout is used... 00:38:45.475 [WS-CLEANUP] done 00:38:45.477 [Pipeline] } 00:38:45.494 [Pipeline] // catchError 00:38:45.508 [Pipeline] sh 00:38:45.793 + logger -p user.info -t JENKINS-CI 00:38:45.803 [Pipeline] } 00:38:45.816 [Pipeline] // stage 00:38:45.821 [Pipeline] } 00:38:45.834 [Pipeline] // node 00:38:45.839 [Pipeline] End of Pipeline 00:38:45.874 Finished: SUCCESS